Sci and Tec Readings GP PDF

The document discusses 10 emerging technologies that were selected by Bill Gates as breakthrough technologies that could improve lives, including robot dexterity for manufacturing, new nuclear power designs, detecting preterm birth through blood tests, pill-based medical devices, and personalized cancer vaccines. These technologies aim to benefit society by improving manufacturing, providing safer energy, helping babies and reducing malnutrition, enabling low-cost healthcare, and offering more effective cancer treatment. If successfully developed, they could materially enhance health, productivity and well-being.

HCI C2 General Paper (2020)

Science and Technology


Past essay questions ('A' Level and HCI abridged questions):
1. In an age of rapid technological advancement, is a single career for life realistic? (18)
2. Can the use of animals for scientific research ever be justified? (17)
3. 'Longer life expectancy creates more problems than benefits.' Discuss. (16)
4. How far has modern technology made it necessary for individuals to possess mathematical skills? (16)
5. To what extent is artificial intelligence replacing the role of humans? (19)
6. How far is science fiction becoming fact? (17)
7. Consider the view that we do not take enough responsibility for our own wellbeing. (18)
8. 'Human need, rather than profit, should always be the main concern of scientific research.' Discuss. (16)
9. 'Science is the only answer to global hunger.' Discuss. (19)
10. 'The danger of modern science is that instead of teaching mankind humility, it has made us supremely arrogant.' (16)
11. 'All scientific research which raises controversial ethical issues should be banned.' (16)
12. 'Medical science should focus on the problems of the poor instead of the dreams of the rich.' (17)
13. 'Recent developments in science or technology: innovative but ill-advised.' (19)
14. 'Advances in military technology make the world a safer and more peaceful place to live in.' (16)
15. How worried should we be that recent advances in science and technology are creating new challenges and worsening old problems? (17)
16. 'We should fear, rather than celebrate, the latest advances in science and technology.' (17)
17. 'To fear genetic engineering is to reject the solutions it offers to mankind.' (19)
18. Why worry about what technological advancement may do to us when we can just enjoy what it can do for us? (19)
19. 'Technology is advancing too rapidly for our own good.' (18)
20. 'The value of the humanities in a technology-driven world.' (19)
21. 'While science requires a leap of faith, the arts demand a leap of the imagination.' (18)
22. 'Mankind's technological innovations say little about his intelligence, but speak volumes about his laziness.' (18)

Issues
s[1-6] emerging technologies, disruption
s[7-9] social credit
s[10-15] influence, misinformation, disinformation
s[16] internet minute
s[17-19] deepfakes
s[20] parents
s[21] youth
s[22] education
s[23] gaming
s[24] memes
s[25] illicit trade
s[26-27] cyberattacks, cyberwar
s[28-29] bionics
s[30-39] gene editing, gene therapy, biomedical technology, biopharmaceuticals, sociogenomics, ethics
s[40-42] regeneration, xenotransplantation, bioprinting
s[43-46] agriculture, food, waste, climate
s[47-50] surveillance
s[51-57] smart cities
s[58-62] automation, AI, gig economy
s[63] terrorism
s[64] what science needs
Note: Some articles have not been presented in their entirety. Ellipses denote omissions.
[1] Gartner highlights 29 emerging technologies with significant impact on business, society and people over the next five to 10 years.

[2] Gartner also identifies top 10 strategic technology trends for 2020:
1. Hyperautomation
2. Multiexperience
3. Democratisation
4. Human augmentation
5. Transparency and traceability
6. Empowered edge
7. Distributed cloud
8. Autonomous things
9. Practical blockchain
10. AI security
EXPLAIN THREE OF THE TRENDS
AND THEIR IMPACT.

www.gartner.com/smarterwithgartner/5-trends-appear-on-the-gartner-hype-cycle-for-emerging-technologies-2019/
www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/

[3] What is disruptive innovation?
Rosamond Hutt. World Economic Forum. Jun 25, 16
https://www.weforum.org/agenda/2016/06/what-is-disruptive-innovation/

[4] The MIT Technology Review invited Bill Gates to curate its 2019 list of 10 Breakthrough Technologies.
Gates's selection centres on innovation that improves life. He observes that because we are living
longer, we are starting to work on well-being even as we continue to strive towards longer life spans. Such
endeavours are not mutually exclusive. Gates opines that if mankind can overcome challenges such as
disease and climate change, innovation will be free to focus on more metaphysical concerns.
CONSIDER WHY THESE INNOVATIONS ARE SAID TO BE “BREAKTHROUGH TECHNOLOGIES”. WHO WILL BENEFIT AND HOW –
MATERIALLY OR INTANGIBLY? (Note that many of the developments are in nascent stages.)
1. Robot Dexterity: While a robot can’t yet be programmed to figure out how to grasp any object just by
looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and
error … We’ll need further breakthroughs for robots to master the advanced dexterity needed in a real
warehouse or factory. But if researchers can reliably employ [reinforcement] learning, robots might
eventually assemble our gadgets, load our dishwashers, and even help Grandma out of bed.
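The "virtual trial and error" described here is reinforcement learning. A minimal sketch of the idea, with invented grasp angles and success rates standing in for a real simulator: an epsilon-greedy learner samples actions, observes success or failure, and settles on the grip that works most often.

```python
import random

# Toy trial-and-error learner. A simulated "gripper" tries one of several
# grasp angles; an epsilon-greedy bandit learns which angle succeeds.
# The angles and success rates are invented for illustration.

random.seed(0)
SUCCESS_RATE = {0: 0.1, 45: 0.2, 90: 0.8, 135: 0.3}  # hidden from the learner

def attempt_grasp(angle):
    """Simulated environment: 1 if the grasp succeeds, else 0."""
    return 1 if random.random() < SUCCESS_RATE[angle] else 0

value = {a: 0.0 for a in SUCCESS_RATE}  # estimated success rate per angle
count = {a: 0 for a in SUCCESS_RATE}

for trial in range(2000):
    if random.random() < 0.1:                  # explore occasionally
        angle = random.choice(list(SUCCESS_RATE))
    else:                                      # otherwise exploit best guess
        angle = max(value, key=value.get)
    reward = attempt_grasp(angle)
    count[angle] += 1
    value[angle] += (reward - value[angle]) / count[angle]  # running average

best = max(value, key=value.get)
print("learned best grasp angle:", best)
```

Real robotic manipulation uses far richer state and action spaces, but the loop is the same: act, observe a reward, update the estimate.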
2. New-wave nuclear power: Advanced fusion and fission reactors are edging closer to reality. New
nuclear designs that have gained momentum in the past year are promising to make this power source
safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs;
small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach …
Many consider fusion a pipe dream, but because the reactors can’t melt down and don’t create long-lived,
high-level waste, it should face much less public resistance than conventional nuclear … [but] no one
expects delivery before 2030.
3. Predicting preemies: Stephen Quake, a bioengineer … has found a way to use that to tackle one of
medicine’s most intractable problems: the roughly one in 10 babies born prematurely. Our genetic
material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood,
often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic
acids from the foetus, the placenta, and the mother … it’s now easier to detect and sequence the small
amounts of cell-free genetic material in the blood … By sequencing the free-floating RNA in the mother’s
blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated
with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take
measures to stave off an early birth and give the child a better chance of survival.
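The screening logic can be caricatured as a panel test: compare measured expression levels against reference ranges and flag risk when several genes deviate together. The gene names, ranges, and "3 of 7" rule below are all invented for illustration; the article specifies neither Quake's genes nor his actual classifier.

```python
# Hypothetical sketch of gene-expression screening. Everything here
# (gene names, reference ranges, the deviation rule) is made up.

REFERENCE_RANGE = {f"gene_{i}": (0.8, 1.2) for i in range(1, 8)}  # 7 genes

def flag_preterm_risk(expression, min_deviations=3):
    """Return True if enough genes fall outside their reference range."""
    deviations = sum(
        1 for gene, (lo, hi) in REFERENCE_RANGE.items()
        if not lo <= expression[gene] <= hi
    )
    return deviations >= min_deviations

sample = {f"gene_{i}": 1.0 for i in range(1, 8)}
sample.update({"gene_1": 1.9, "gene_4": 0.2, "gene_6": 1.6})  # 3 outliers
print(flag_preterm_risk(sample))
```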
4. Gut-probe in a pill: Marked by inflamed intestines that are leaky and absorb nutrients poorly,
environmental enteric dysfunction (EED) is widespread in poor countries and is one reason why many
people there are malnourished, have developmental delays, and never reach a normal height. No one
knows exactly what causes EED and how it could be prevented or treated. Practical screening to detect
it would help medical workers know when to intervene and how. Therapies are already available for infants,
[but diagnosis requires an endoscopy, which is] expensive, uncomfortable, and not practical in areas of the world
where EED is prevalent. So Guillermo Tearney, a pathologist and engineer … is developing small devices
that can be used to inspect the gut for signs of EED and even obtain tissue biopsies … they are simple
to use at a primary care visit.
5. Custom cancer vaccines: Scientists are on the cusp of commercialising the first personalised cancer
vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumour
by its unique mutations, could effectively shut down many types of cancers. By using the body’s natural
defences to selectively destroy only tumour cells, the vaccine, unlike conventional chemotherapies, limits
damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer
cells after the initial treatment. The possibility of such vaccines began to take shape in 2008, five years
after the Human Genome Project was completed, when geneticists published the first sequence of a
cancerous tumour cell.
6. Cow-free burger: The UN expects the world to have 9.8 billion people by 2050, and those people are
getting richer. Neither trend bodes well for climate change—especially because as people escape
poverty, they tend to eat more meat. By that date, according to the predictions, humans will consume 70
per cent more meat than they did in 2005 … raising animals for human consumption is among the worst
things we do to the environment. Depending on the animal, producing a pound of meat protein with
Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20
times more fossil fuels than producing a pound of plant protein … lab-grown and plant-based alternatives
might be the best way to limit the destruction … involves extracting muscle tissue from animals and
growing it in bioreactors … researchers are still working on the taste … One drawback … is that the
environmental benefits are still sketchy at best.
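The resource comparison quoted above can be tabulated directly from the passage's own figures (the multiples of what a pound of plant protein requires):

```python
# Resource multipliers for a pound of meat protein versus a pound of
# plant protein, as quoted in the passage (Western industrialised methods).
MULTIPLIERS = {
    "water":        (4, 25),
    "land":         (6, 17),
    "fossil fuels": (6, 20),
}

for resource, (low, high) in MULTIPLIERS.items():
    print(f"{resource}: {low}x to {high}x the plant-protein requirement")
```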
7. Carbon dioxide catcher: Even if we slow carbon dioxide emissions, the warming effect of the greenhouse
gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate
panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the
atmosphere this century … Harvard climate scientist David Keith calculated that machines could, in theory,
pull this off for less than $100 a ton, through an approach known as direct air capture … [While the
captured CO2 can be used in products such as synthetic fuel and soda drinks,] the ultimate goal is to
lock greenhouse gases away forever. Some could be nested within products like carbon fibre, polymers,
or concrete, but far more will simply need to be buried underground, a costly job … pulling CO2 out of
the air is, from an engineering perspective, one of the most difficult and expensive ways of dealing with
climate change. But given how slowly we’re reducing emissions, there are no good options left.
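The passage's own numbers imply a striking back-of-envelope total, assuming Keith's cost ceiling held at full scale:

```python
# Upper-bound cost implied by the figures above: removing up to 1 trillion
# tons of CO2 this century at David Keith's estimated ceiling of $100/ton.
TONS_TO_REMOVE = 1e12   # "as much as 1 trillion tons"
COST_PER_TON = 100      # "less than $100 a ton", in US dollars

total_cost = TONS_TO_REMOVE * COST_PER_TON
print(f"implied upper-bound cost: ${total_cost:,.0f}")  # $100 trillion
```

That is why the passage calls burial "a costly job": even the optimistic per-ton price multiplies out to a century-scale bill on the order of world GDP.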
8. An ECG on your wrist: ECG-enabled smart watches, made possible by new regulations and innovations
in hardware and software, offer the convenience of a wearable device with something closer to the
precision of a medical one … Current wearables still employ only a single sensor, whereas a real ECG
has 12. And no wearable can yet detect a heart attack as it’s happening. But this might change soon. Last
fall, AliveCor presented preliminary results to the American Heart Association on an app and two-sensor
system that can detect a certain type of heart attack.
9. Sanitation without sewers: About 2.3 billion people don’t have good sanitation. The lack of proper
toilets encourages people to dump faecal matter into nearby ponds and streams, spreading bacteria,
viruses, and parasites that can cause diarrhoea and cholera. Diarrhoea causes one in nine child deaths
worldwide. Now researchers are working to build a new kind of toilet that’s cheap enough for the
developing world and can not only dispose of waste but treat it as well. Most of the prototypes are self-
contained and don’t need sewers … One drawback is that [they] don’t work at every scale … the
challenge now is to make these toilets cheaper and more adaptable to communities of different sizes.
10. Smooth-talking AI assistants: AI assistants were supposed to have simplified our lives, but … recognise
only a narrow range of directives and are easily tripped up by deviations. New techniques that capture
semantic relationships between words are making machines better at understanding natural language
… letting us move from giving AI assistants simple commands to having conversations with them. They’ll
be able to deal with daily minutiae … [become] better at figuring out what you want … [but] still can’t
understand a sentence … reflecting how hard it is to imbue machines with true language understanding.
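The "new techniques that capture semantic relationships between words" are word embeddings: each word maps to a vector, and related words end up with similar vectors. A toy sketch with hand-made 3-dimensional vectors (real embeddings are learned from text and have hundreds of dimensions):

```python
import math

# Minimal word-embedding sketch: cosine similarity between word vectors.
# These tiny 3-d vectors are hand-made for illustration only.
EMBEDDING = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_royal = cosine_similarity(EMBEDDING["king"], EMBEDDING["queen"])
sim_fruit = cosine_similarity(EMBEDDING["king"], EMBEDDING["apple"])
print(f"king~queen: {sim_royal:.2f}, king~apple: {sim_fruit:.2f}")
```

Because "king" and "queen" point in nearly the same direction, their similarity is close to 1, while "king" and "apple" score much lower; this geometric closeness is what lets assistants treat related words as related commands.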

WHICH OF THESE CHALLENGES SHOULD BE A PRIORITY? OR IS THERE SOMETHING ELSE EVEN MORE CRITICAL?
From [6]: A. Carbon sequestration B. Grid-scale energy storage C. Universal flu vaccine
D. Dementia treatment E. Ocean clean-up F. Energy-efficient desalination G. Safe driverless car
H. Embodied AI I. Earthquake prediction J. Brain decoding

[4] 10 Breakthrough Technologies 2019. MIT Technology Review.
[5] 10 Breakthrough Technologies 2018. MIT Technology Review.
[6] Ten big global challenges technology could solve. MIT Technology Review. Feb 27, 19
www.technologyreview.com/lists/technologies/2019/
www.technologyreview.com/lists/technologies/2018/
www.technologyreview.com/s/612951/ten-big-global-challenges-technology-could-solve/

[7-9] Governments are harnessing technology to solve problems and support aspirations. The Chinese
government is on the cusp of rolling out its social credit system nationwide. Every citizen will have a social
credit score; desirable behaviour will be rewarded, and bad habits penalised with inconveniences and
sanctions. There will be a publicly accessible “red list” of model citizens as well as a black list.
A string of misdeeds has called into question the state of morality in China. These include Changsheng
Bio-technology Co Ltd selling substandard vaccines in 2018 and 2019, Ezubao's involvement in a Ponzi scheme
in 2016, and the Sanlu melamine-tainted milk powder debacle that killed six babies in 2008. About 80 per cent
of 2,200 Chinese netizens surveyed in 2018 by Freie Universität Berlin – especially older, more educated, and
higher-income respondents – approved of social credit systems. But there are those who worry about differing
interpretations creating confusion and unfairness and an eventual descent into totalitarian control. WEIGH
THE DESIRED GAINS AGAINST THE POTENTIAL LOSSES.
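The mechanics described (a running score, rewards and penalties, red and black lists at the extremes) can be caricatured in a few lines. Every behaviour, point value, and threshold below is invented; the real pilots differ city by city and are far more complex.

```python
# Toy caricature of a social credit scheme: adjust a score for listed
# behaviours, then assign a public list. All rules here are invented.
ADJUSTMENTS = {
    "volunteering":         +5,
    "paying_bills_on_time": +2,
    "jaywalking":           -3,
    "defaulting_on_debt":  -20,
}

def updated_score(score, behaviours):
    return score + sum(ADJUSTMENTS[b] for b in behaviours)

def classify(score):
    if score >= 120:
        return "red list"    # publicly listed model citizen
    if score <= 80:
        return "black list"  # faces inconveniences and sanctions
    return "unlisted"

score = updated_score(100, ["volunteering", "jaywalking"])
print(score, classify(score))  # 102 unlisted
```

Even this toy makes the policy questions concrete: who chooses the behaviours and weights, and what recourse exists when the thresholds misclassify someone.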

[7] China's Social Credit Lab. Channel Newsasia. Mar 25, 19
[8] Civilising China. A contentious social credit system moves boldly forward. Channel Newsasia. Apr 8, 19
[9] The Game of Life. Visual Capitalist. Sep 18, 19

www.channelnewsasia.com/news/video-on-demand/get-real/china-s-social-credit-lab-11370940
www.channelnewsasia.com/news/cnainsider/civilising-china-contentious-social-credit-system-moves-boldly-11419272
www.visualcapitalist.com/the-game-of-life-visualizing-chinas-social-credit-system/

IN YOUR OPINION, DOES YOUR COUNTRY NEED A SOCIAL CREDIT SYSTEM TO SHAPE BEHAVIOUR?

[10] It’s not easy to spot disinformation on Twitter. Here’s what we learned from 8
political ‘astroturfing’ campaigns (extract)
Franziska Keller, David Schoch, Sebastian Stier and JungHwan Yang. The Washington Post. Oct 28, 19
www.washingtonpost.com/politics/2019/10/28/its-not-easy-spot-disinformation-twitter-heres-what-we-learned-political-astroturfing-campaigns/

Facebook recently announced it had dismantled a number of alleged disinformation campaigns, including
Russian troll accounts targeting Democratic presidential candidates. Over the summer, Twitter and Facebook
suspended thousands of accounts they alleged to be spreading Chinese disinformation about Hong Kong
protesters. Disinformation campaigns based in Egypt and the United Arab Emirates used fake accounts on
several platforms this year to support authoritarian regimes across nearly a dozen Middle East and African
countries. And the Mueller report describes in detail how Russian trolls impersonated right-wing agitators and
Black Lives Matter activists to sow discord in the 2016 U.S. presidential election.
How can you distinguish real netizens from participants in a hidden influence campaign on Twitter? It’s not
easy. We examined eight hidden propaganda campaigns worldwide, comprising over 20,000 individual
accounts. We looked at Russia’s interference in the 2016 presidential election, and the South Korean secret
service’s attempt to influence that country’s 2012 presidential election. We looked at further examples
associated with Russia, China, Venezuela, Catalonia and Iran. All of these were “astroturfing” campaigns —
the goal is to mislead the public, giving a false impression that there is genuine grass-roots support or
opposition for a particular group or policy. We found that these disinformation campaigns don’t solely rely on
automated “bots” or bot accounts — contrary to popular media stories. Only a small fraction of the 20,000
accounts we reviewed (between 0 and 18 per cent, depending on the campaign) are “bot accounts” that posted
more than 50 tweets per day on a regular basis — a threshold some researchers use to distinguish automated
accounts from bona fide individual users. This isn’t a big surprise. Insiders have long reported that humans,
not programmers or AI, are behind these “troll farms”.
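The researchers' 50-tweets-a-day screening threshold can be sketched directly. The sample accounts below, and the reading of "on a regular basis" as "most observed days", are assumptions for illustration.

```python
# Sketch of the bot-account screening rule mentioned above: flag accounts
# that post more than 50 tweets a day on most observed days.
DAILY_LIMIT = 50

def is_likely_bot(daily_tweet_counts, regular_fraction=0.5):
    """Flag accounts that exceed the daily limit on most observed days."""
    heavy_days = sum(1 for n in daily_tweet_counts if n > DAILY_LIMIT)
    return heavy_days / len(daily_tweet_counts) > regular_fraction

human = [3, 12, 0, 7, 55, 4, 9]               # one busy day, mostly quiet
automated = [180, 220, 195, 240, 210, 60, 205]  # heavy posting every day

print(is_likely_bot(human), is_likely_bot(automated))  # False True
```

The study's point follows from the rule: since only 0 to 18 per cent of the 20,000 accounts cleared this bar, most astroturfing accounts post at human rates and cannot be caught by volume alone.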

[11-12] According to Channel Newsasia, during the Indonesian presidential elections in 2019, online campaign
strategists operated hundreds of personalised social media accounts to support Joko Widodo’s and
Prabowo Subianto’s bids without being directly linked to them. While some steered clear of fake news,
others were not as conscientious about the accuracy of their content. Representatives for Twitter, Facebook
and Whatsapp said they regularly deleted fake accounts in Indonesia, but declined to share removal numbers.

The New York Times asked its readers for examples of election-related misinformation they saw online and
collected more than 4,000 examples. It offered a review of some of the major types (“We Asked for Examples of
Election Misinformation. You Delivered.” Nov 8, 2018):
§ ‘Hoax floods’ after major news events § Russian Reddit manipulation § Sketchy text messages
§ Poorly labelled campaign ads § Deceptive claims about candidates § Committee-sponsored ‘attack pages’

[13] According to The Guardian, ‘the 2016 presidential race was widely regarded as a wake-up call to the
spectre of foreign influence, following what the US government concluded was a “systematic” Russian
campaign to undermine its democratic process. But under the Trump presidency, fake news and
misinformation has also grown into a new front in US political warfare. The result, according to the study by
the Pew Research Centre, is that almost 70 per cent of Americans feel fake news and misinformation have
greatly affected their confidence in government institutions, and experts warn of a deepening crisis if the
status quo is left unchecked.’

[14] There are fears that disinformation and misinformation will mar the upcoming US elections. ARE THERE
SIMILAR CONCERNS IN YOUR COUNTRY? IF SO, HOW ARE THEY BEING DEALT WITH?

[11] The New Cyber Army. Channel Newsasia. Apr 1, 19
[12] In Indonesia, Facebook and Twitter are 'buzzer' battlegrounds as elections loom. Channel Newsasia. Mar 13, 19
[13] Half of Americans see fake news as bigger threat than terrorism, study finds. The Guardian. Jun 7, 19
[14] Russia May Not Be the Only Threat to U.S. Elections in 2020. Time. Oct 30, 19
www.channelnewsasia.com/news/video-on-demand/get-real/the-new-cyber-army-11408940
www.channelnewsasia.com/news/technology/in-indonesia--facebook-and-twitter-are--buzzer--battlegrounds-as-elections-loom-11338744
www.theguardian.com/us-news/2019/jun/06/fake-news-how-misinformation-became-the-new-front-in-us-political-warfare
time.com/5713739/us-election-threats-2020/

DO YOU SHARE THE WRITER’S CONCERNS?

[15] Foreign influence ops are a reality, but Singaporeans shouldn't overreact
Muhammad Faizal Abdul Rahman. The Straits Times. Jul 25, 19
www.straitstimes.com/opinion/foreign-influence-ops-are-a-reality-but-singaporeans-shouldnt-overreact

The threat of foreign influence infiltrating Singapore came into the limelight following the publication of the
report "A Preliminary Survey of CCP Influence Operations in Singapore" by the Jamestown Foundation, which
is a Washington-based research institute. The report finds that the Chinese Communist Party (CCP) exploits
cultural and business associations, media, and all people of Chinese ethnicity in Singapore as proxies to
influence Singapore's society and politics. This exposé came five months after the Singapore Government
highlighted the need to bolster the nation's defences against foreign influence. Other nations are doing the
same. For example, Australia passed the Foreign Influence Transparency Scheme Act last December. The
United States has the Foreign Agents Registration Act (Fara), which was enacted in 1938 to counter pro-Nazi
influence in the lead-up to World War II; recently, Washington has stepped up Fara enforcement.
As foreign influence can affect ordinary people outside the political and policymaking echelons, it is important
to uncover the mystique that surrounds this threat. Singaporeans constitute an important line of defence as
foreign influence targets a nation through its society's hearts and minds. Historically, foreign influence is not
a new threat to Singapore and the region, but it takes new forms to suit the prevailing socio-economic
landscape and geopolitical winds of change. For example, the past interference of colonial powers in the internal
affairs of East Asian polities to gain control over trade and territory was an early form of this threat. During the
Cold War, the newspaper Nanyang Siang Pau in 1971 was found to be a proxy for the spread of pro-communist
propaganda in Singapore. In 1988, a US diplomat here was found to be cultivating Singaporeans "as proxies to
influence domestic politics".
Foreign influence peaks during periods of heightened rivalries between nations with competing geopolitical
goals. It is a tool of diplomacy that functions in coordination with other tools of power - information, military
and economy - used by any state actor to shape an international environment favourable to its interests. For
China, foreign influence perhaps lives by Sun Tzu's famous dictum "to subdue the enemy without fighting is the
acme of skill". Foreign influence becomes a serious problem when it seeks to divide the affected nation's
society and subvert its institutions as a means to an end. The problem worsens when foreign influence
cultivates a fifth column - individuals who knowingly or unknowingly aid a foreign power to interfere in their
nation's affairs from the inside. Following the recent exposé, China refuted the report that it undertakes influence
operations in Singapore to subliminally pressure the nation to pick sides in the China-US geopolitical rivalry.
This response highlights a key feature of foreign influence operations by any state actor: Such operations
happen under a cloak of ambiguity in which denial and deniability are central to their strategy and methods.
While Singapore must guard against foreign influence, its society must not overreact to the extent of
succumbing to prejudices and casting suspicion on Singaporeans and foreign residents on the basis of their
race, religion, age, and/or nationality. An overreaction to the threat can potentially harm social cohesion by
opening the Pandora's box of racism, xenophobia and ageism. This is a problem that some Australians and
Americans of Chinese ethnicity are facing. As a multicultural nation, Singapore must never get to the point
where some of its citizens are asked or made to wonder if they are Singaporean enough. As a cosmopolitan
society and global city, Singapore must not allow anti-foreigner sentiments to become entrenched or close itself
to the global market that was key to the nation's transition from Third World to First. As a greying society that is
making significant moves to help older Singaporeans age gracefully and stay active, Singapore must not
overreact to the Jamestown Report's generalisation that older Chinese Singaporeans "have a stronger affinity
for China" and are more susceptible to Chinese nationalism. Regardless of race, older-generation Singaporeans
- the Merdeka Generation and Pioneer Generation - have given their collective blood, sweat and tears to build
the nation. In general, the older generations exemplify the value that "We are Singaporeans first".
National defence and social cohesion are strengths that were cultivated in Singapore's nation-building
experience. Nevertheless, the nation must be cognisant that hostile foreign influence aims to chip away at these
strengths so that Singaporeans are not psychologically and emotionally free from external forces, although the
homeland is territorially sovereign. Singapore was never spared and would never be spared from machinations
arising from geopolitical rivalries. That is the hard truth. Although Singapore currently has the Political Donations
Act and Internal Security Act to deter foreign interference, new legislative tools may be necessary to address
new forms of hostile foreign influence that, by nature, are duplicitous and elusive. These tools would function in
tandem with other security measures to keep Singapore safe in an increasingly troubled geopolitical
environment, which Singaporeans cannot be blissfully unaware of. In the current landscape, where conflict is
waged against society besides the state, Singaporeans must take the lessons in social defence and
psychological defence - Total Defence and National Education - at school and throughout adulthood more
seriously.
As ideas, emotions and culture are the currency of conflict, social cohesion is as important as military skills and
hardware. As Singapore strives to stay relevant to the world, the nation must stay united to remain free and
strong.

[16] Pulling public records and claims from major internet companies, media consultants Lori Lewis and Chadd
Callahan offer annual charts of what happens on average every minute of the day on the Internet. DISCUSS
WHY SOME INTERNET-BASED APPLICATIONS HAVE BECOME UBIQUITOUS. CONSIDER HOW PEOPLE’S BELIEFS AND HABITS
ARE BEING SHAPED BY THEIR RELIANCE.

www.visualcapitalist.com/what-happens-in-an-internet-minute-in-2019/
www.visualcapitalist.com/internet-minute-2018/
www.visualcapitalist.com/what-happens-internet-minute-2016/

[17-19] The technology that enables detailed editing of digital content can also be used to create “deepfakes”.
WHY IS SUCH CONTENT BEING CREATED – “JUST FOR FUN” OR IN SUPPORT OF DARKER MOTIVES?

[17] What ‘deepfakes’ are and how they may be dangerous. CNBC. Oct 13, 19
[18] Could deepfakes weaken democracy? The Economist. Oct 22, 19
[19] Women in public life are increasingly subject to sexual slander. Don’t believe it. The Economist. Nov 9, 19
www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html
www.youtube.com/watch?v=_m2dRDQEC1A
www.economist.com/leaders/2019/11/09/women-in-public-life-are-increasingly-subject-to-sexual-slander-dont-believe-it

HOW WOULD YOU PARENT YOUR FUTURE “DIGITAL NATIVE”?

[20] Co-Parenting With Alexa (extracts)
Rachel Botsman. The New York Times. Oct 7, 2017
www.nytimes.com/2017/10/07/opinion/sunday/children-alexa-echo-robots.html

“You are going to have a chance to play with Alexa,” I told my daughter, Grace, who’s 3 years old. Pointing at
the black cylindrical device, I explained that the speaker, also known as the Amazon Echo, was a bit like Siri
but smarter. “You can ask it anything you want,” I said nonchalantly.
Grace leaned forward toward the speaker. “Hello, Alexa, my name is Gracie,” she said. “Will it rain today?” The
turquoise rim glowed into life. “Currently, it is 60 degrees,” a perky female voice answered, assuring her it
wouldn’t rain. Over the next hour, Grace figured out she could ask Alexa to play her favourite music from the
film “Sing.” She realized Alexa could tell jokes, do math or provide interesting facts. “Hey, Alexa, what do brown
horses eat?” And she soon discovered a whole new level of power. “Alexa, shut up,” she barked, then looked a
little sheepish and asked me if it was O.K. to be rude to her. So she thought the speaker had feelings?
By the next morning, Alexa was the first “person” Grace said hello to as she bounded into the kitchen wearing
her pink fluffy dressing gown. My pre-schooler who can’t yet ride a bike or read a book had also quickly mastered
that she could buy things with the bot’s help, or at least try to. “Alexa, buy me blueberries,” she commanded.
Grace, of course, had no idea that Amazon, the world’s biggest retailer, was the corporate behemoth behind
the helpful female assistant, and that smoothing the way when it came to impulse buys was right up Alexa’s
algorithmic alley.
Grace’s easy embrace of Alexa was slightly amusing but also alarming. My small experiment, with my daughter
as the guinea pig, drove home to me the profound shift in our relationship with technology. For generations,
our trust in it has gone no further than feeling confident the machine or mechanism will do what it’s supposed
or expected to do, nothing more, nothing less. We trust a washing machine to clean our clothes or an A.T.M.
to dispense money, but we don’t expect to form a relationship with them or call them by name. Today, we’re no
longer trusting machines just to do something, but to decide what to do and when to do it. The next generation
will grow up in an age where it’s normal to be surrounded by autonomous agents, with or without cute names.
The Alexas of the world will make a raft of decisions for my kids and others like them as they proceed through
life … In time, the question for them won’t be, “Should we trust robots?” but “Do we trust them too much?”
With some trepidation, I watched my daughter gaily hand her decisions over. “Alexa, what should I do today?”
Grace asked in her singsong voice on Day 3. It wasn’t long before she was trusting her with the big choices.
“Alexa, what should I wear today? My pink or my sparkly dress?”
[Amazon’s Echo Look is an] Alexa add-on that features a hands-free selfie camera controlled by your voice.
The device doesn’t just hear you, it sees you. According to Amazon, the Style Check feature uses “machine-
learning algorithms with advice from fashion specialists” to judge different outfits, awarding them an overall
rating to decide which is “better” based on “current trends and what flatters you.” The images it takes of you
happen to be stored in the Amazon Web Services cloud until you delete them. And while the fashion-savvy
assistant helps you decide what to wear, it has an ulterior motive: to sell you clothing, including choices from
one of Amazon’s own apparel lines, such as Lark & Ro and North Eleven, started in 2016.
It’s these kinds of intersections – like this small collision between robot “helpfulness” and a latent commercial
agenda — that can make parents like me start to wonder about the ethical niceties of this brave new bot world.
Alexa, after all, is not “Alexa.” She’s a corporate algorithm in a black box. [Moreover,] Grace doesn’t like it when
I tell her what to wear. How would she feel about Alexa judging her? Would she see it as helpful or crushing?
This could well be one of our parenting tasks in the near future — preparing our children for the psychological
repercussions of such personal interactions with computer “people.”
Still, the next generation is likely to feel very differently about machines than we do. In a study conducted by
M.I.T. Media Lab, 27 children, aged between 3 and 10, interacted with A.I. devices and toys … The researchers
asked the children how they felt about the devices in terms of intelligence, personality and trust. The younger
children seemed to see the agents as real people and asked them personal questions … Some thought the
device had multiple personalities … Almost 80 per cent of the children thought Alexa would always tell the truth.
Some of the children believed they could teach the devices something useful, like how to make a paper plane,
suggesting they felt a genuine, give-and-take relationship with the machines.
How do we teach our children to question not only the security and privacy implications but also the ethical
and commercial intentions of a device designed by marketers? Our kids are going to need to know where and
when it is appropriate to put their trust in computer code alone. I watched Grace hand over her trust to Alexa
quickly. There are few checks and balances to deter children from doing just that, not to mention very few tools
to help them make informed decisions about A.I. advice. And isn’t helping Gracie learn how to make decisions
about what to wear — and about far more important things in life — my job?
I decided to retire Alexa to the closet.

WHAT IS YOUR RELATIONSHIP WITH TECHNOLOGY, ESPECIALLY SOCIAL MEDIA?

[21] What do adults not know about my generation and technology?


Taylor Fang. MIT Technology Review. Dec 21, 19
https://www.technologyreview.com/s/614897/youth-essay-contest-adults-dont-understand-kid-technology/

“What do adults not know about my generation and technology?” MIT Technology Review posed this question in an essay contest open to
anyone 18 or younger. We received 376 submissions from young people in 28 different countries. Many were angry; some were despondent.
We think the winning essay, by Taylor Fang, presents a nuanced and moving view of how technology can be harnessed in the service of a
richly realized life. We hope you agree.
Screen. To conceal, protect, shelter. The word signifies invisibility. I hid behind the screen. No one could see
through the screen. The screen conceals itself: sensors and sheet glass and a faint glow at the edges; light,
bluer than a summer day. The screen also conceals those who use it. Our phones are like extensions of our
bodies, always tempting us. Algorithms spoon-feed us pictures. We tap. We scroll. We click. We ingest. We
follow. We update. We gather at traditional community hangouts only to sit at the margins, browsing Instagram.
We can’t enjoy a sunset without posting the view on Snapchat. Don’t even mention no-phone policies at dinner.
Generation Z is entitled, depressed, aimless, addicted, and apathetic. Or at least that’s what adults say about
us. But teens don’t use social media just for the social connections and networks. It goes deeper. Social-media
platforms are among our only chances to create and shape our sense of self. Social media makes us feel seen.
In our Instagram “biographies,” we curate a line of emojis that feature our passions: skiing, art, debate, racing.
We post our greatest achievements and celebrations. We create fake “finsta” accounts to share our daily
moments and vulnerabilities with close friends. We find our niche communities of YouTubers.
It’s true that social media’s constant stream of idealized images takes its toll: on our mental health, our self-
image, and our social lives. After all, our relationships to technology are multidimensional—they validate us just
as much as they make us feel insecure. But if adults are worried about social media, they should start by
including teenagers in conversations about technology. They should listen to teenagers’ ideas and visions for
positive changes in the digital space. They should point to alternative ways for teenagers to express their voices.
I’ve seen this from my own experience. When I got my first social-media account in middle school, about a year
later than many of my classmates, I was primarily looking to fit in. Yet I soon discovered the sugar rush of likes
and comments on my pictures. My life mattered! My captions mattered! My filters! My stories! My followers! I
was looking not only for validation, but also for a way to represent myself. Who do I want to be seen as? On the
internet I wasn’t screaming into the void—for the first time, I felt acutely visible. Yet by high school, this cycle of
presenting polished versions of myself grew tiring. I was tired of feeling like I was missing out. I was tired of
adhering to hypervisible social codes and tokens. By 10th grade, I was using social media only sporadically.
Many of my friends were going through the same shifts and changes in their ideas about social media.
For me, the largest reason was that I had found another path of self-representation: creative writing. I began
writing poetry, following poets on Twitter (with poems replacing pictures and news in my feed), and spending
the majority of my free time scribbling in a journal outdoors. I didn’t feel I needed Facebook as much. If I did
use social media, it was more for entertaining memes. This isn’t to say that every teenager should begin creating
art. Or that art would solve all of social media’s problems. But approaching technology through a creative lens
is more effective than merely “raising awareness.” Rather than reducing teenagers to statistics, we should make
sure teenagers have the chance to tell their own experiences in creative ways.
Take the example of “selfies.” Selfies, as many adults see them, are nothing more than narcissistic pictures to
be broadcast to the world at large. But even the selfie representing a mere “I was here” has an element of truth.
Just as Frida Kahlo painted self-portraits, our selfies construct a small part of who we are. Our selfies, even as
they are one-dimensional, are important to us. At this critical moment in teenagers’ and children’s lives, we all
need to feel less alone and to feel as if we matter. Teenagers are disparaged for not being “present.” Yet we
find visibility in technology. Our selfies aren’t just pictures; they represent our ideas of self. Only through
“reimagining” the selfie as a meaningful mode of self-representation can adults understand how and why
teenagers use social media. To “reimagine” is the first step toward beginning to listen to teenagers’ voices.
Meaning—scary as it sounds—we have to start actually listening to the scruffy video-game-hoarding teenage
boys stuck in their basements. Because our search for creative self isn’t so different from previous generations’.
To grow up with technology, as my generation has, is to constantly question the self, to split into multiplicities,
to try to contain our own contradictions. In “Song of Myself,” Walt Whitman famously said that he contradicted
himself. The self, he said, is large, and contains multitudes. But what is contemporary technology if not a
mechanism for the containment of multitudes?
So don’t tell us technology has ruined our inner lives. Tell us to write a poem. Or make a sketch. Or sew fabric
together. Or talk about how social media helps us make sense of the world and those around us. Perhaps
social-media selfies aren’t the fullest representations of ourselves. But we’re trying to create an integrated
identity. We’re striving not only to be seen, but to see with our own eyes.
Taylor Fang is a senior at Logan High School in Logan, Utah.

WHAT DO YOU MAKE OF THE OBSERVATIONS AND FINDINGS IN THIS ARTICLE? (THE NEXT ONE TOO)
[22] How classroom technology is holding students back (extracts)
Natalie Wexler. MIT Technology Review. Dec 19, 19
www.technologyreview.com/s/614893/classroom-technology-holding-students-back-edtech-kids-education/

In a first grade classroom I visited a few years ago, most of the six-year-olds were using iPads or computers.
They were working independently on math problems supposedly geared to their ability, while the teacher
worked separately with a small group. I watched as one boy, whom I’ll call Kevin, stared at an iPad screen
that directed him to “combine 8 and 3.” A struggling reader (like almost all his classmates), he pressed the
“Listen” button. But he still didn’t try to provide an answer.
“Do you know what combine means?” I asked. Finding that he didn’t, I explained it meant “add.” Satisfied that
I’d put Kevin on the path to success, I moved on to observe other students—and found their iPads displaying
sentences like Round 119 to the nearest ten and Find the area of the following triangle in square units. If Kevin
didn’t understand combine, were other kids understanding words like round and area? Not to mention square
units?
Then I found a boy staring at a computer screen showing a number line with the question What number comes
before 84? He listened to the instructions and tried 85, then 86, then 87, getting error messages each time.
Thinking the problem was the size of the numbers, I asked him what number comes before four. “Five?” he
guessed. It dawned on me that he didn’t understand the word before. Once I explained it, he immediately
clicked on 83.
I returned to Kevin to see whether he had been able to combine 8 and 3. But I found he was drawing bright
pink lines on the iPad with his finger—one of the gizmo’s numerous distracting capabilities.
“Can you answer the question?” I asked.
“I don’t want to.” He sighed. “Can I play a game?”
The school that Kevin and his classmates attend, located in a poor neighbourhood in Washington, DC, prides
itself on its “one-to-one” policy—the increasingly popular practice of giving each child a digital device, in
this case an iPad. “As technology continues to transform and improve our world,” the school’s website says,
“we believe low-income students should not be left behind.”
Schools across the country have jumped on the education technology bandwagon in recent years, with the
encouragement of technophile philanthropists like Bill Gates and Mark Zuckerberg. As older education reform
strategies like school choice and attempts to improve teacher quality have failed to bear fruit, educators have
pinned their hopes on the idea that instructional software and online tutorials and games can help
narrow the massive test-score gap between students at the top and bottom of the socioeconomic scale. A
recent Gallup report found that 89% of students in the United States (from third to 12th grade) say they use
digital learning tools in school at least a few days a week.
Gallup also found near-universal enthusiasm for technology on the part of educators. Among
administrators and principals, 96% fully or somewhat support “the increased use of digital learning tools in
their school,” with almost as much support (85%) coming from teachers. But it’s not clear this fervour is based
in evidence. When asked if “there is a lot of information available about the effectiveness” of the digital tools
they used, only 18% of administrators said yes, along with about a quarter of teachers and principals. Another
quarter of teachers said they had little or no information.
In fact, the evidence is equivocal at best. Some studies have found positive effects, at least from moderate
amounts of computer use, especially in math. But much of the data shows a negative impact at a range of
grade levels. A study of millions of high school students in the 36 member countries of the Organisation for
Economic Co-operation and Development (OECD) found that those who used computers heavily at school
“do a lot worse in most learning outcomes, even after accounting for social background and student
demographics.” According to other studies, college students in the US who used laptops or digital devices in
their classes did worse on exams. Eighth graders who took Algebra I online did much worse than those who
took the course in person. And fourth graders who used tablets in all or almost all their classes had, on
average, reading scores 14 points lower than those who never used them—a differential equivalent to an
entire grade level. In some states, the gap was significantly larger.
A 2019 report from the National Education Policy Centre at the University of Colorado on personalized
learning—a loosely defined term that is largely synonymous with education technology—issued a sweeping
condemnation. It found “questionable educational assumptions embedded in influential programs, self-
interested advocacy by the technology industry, serious threats to student privacy, and a lack of research
support.”
Judging from the evidence, the most vulnerable students can be harmed the most by a heavy dose of
technology—or, at best, not helped. The OECD study found that “technology is of little help in bridging the
skills divide between advantaged and disadvantaged students.” In the United States, the test score gap
between students who use technology frequently and those who don’t is largest among students from low-

10
income families. A similar effect has been found for “flipped” courses, which have students watch lectures at
home via technology and use class time for discussion and problem-solving. A flipped college math class
resulted in short-term gains for white students, male students, and those who were already strong in math.
Others saw no benefit, with the result that performance gaps became wider.
Even more troubling, there’s evidence that vulnerable students are spending more time on digital devices
than their more privileged counterparts …
The dangers of relying on technology are also particularly pronounced in literacy education and at early
grade levels. Unfortunately, to judge from my observations of classrooms at high-poverty schools like the one
Kevin attends, that’s exactly how and when digital devices are commonly used. The bulk of the elementary
school day—three hours or more, at some schools—is spent on “reading” and the rest on math. Especially in
schools where standardized reading and math scores are low, subjects like social studies and science have
largely disappeared from the curriculum. And the standard class format is to have students rotate through
“centres,” working independently on reading and math skills while the teacher works with a small group. In the
classrooms I’ve been in, at least one of the centres always involves working on a digital device.
Why are these devices so unhelpful for learning? Various explanations have been offered …

[23] Video games are dividing South Korea (extracts)


Max S. Kim. MIT Technology Review. Dec 23, 19
www.technologyreview.com/s/614933/video-games-national-crisis-addiction-south-korea/

They say StarCraft was the game that changed everything. There had been other hits before, from Tetris and
Super Mario Bros to Diablo, but when the American entertainment company Blizzard released its real-time
science fiction strategy game in 1998, it wasn’t just a hit—it was an awakening.
Back then, South Korea was seen as more of a technological backwater than a major market. Blizzard hadn’t
even bothered to localize the game into Korean. Despite this, StarCraft—where players fight each other with
armies of warring galactic species—was a runaway success. Out of 11 million copies sold worldwide, 4.5
million were in South Korea. National media crowned it the “game of the people.”
The game was so popular that it triggered another boom: “PC bangs,” pay-as-you-go gaming cafés stocked
with food and drinks where users could entertain themselves for less than a dollar an hour. As old-world youth
haunts like billiard halls and comic-book stores disappeared, PC bangs took their place, feeding the growing
appetite for StarCraft. In 1998 there were just 100 PC bangs around the country; by 2001 that had multiplied
to 23,000. Economists dubbed the phenomenon “Starcnomics.”
“PC bangs were really the only place where people could relieve their stress,” says Edgar Choi, a former
teenage StarCraft wunderkind who went on to become one of the first professional gamers. Now 35, and still
involved in pro gaming, Choi says that StarCraft and PC bang culture spoke to a generation of young South
Koreans boxed in by economic anxiety and rising academic pressures. “Young people especially had few
other places they could go, especially since parents would just tell them to study if they were at home,” he
says.
The social aspect of StarCraft set the stage for another phenomenon: e-sports. PC bangs began hosting the
first StarCraft competitions—informal neighbourhood affairs where prizes were free playing time and bragging
rights. After one cartoon channel broadcast a tournament on TV to popular acclaim in 1999, organized
competitions took over. By 2004, one finals match held on Busan’s Gwangalli Beach attracted more than
100,000 spectators. Crowds like that drew money and fame. Corporate sponsorships flowed from companies
like Samsung, which created branded professional teams paying big salaries. Lim Yo-hwan, the Michael
Jordan of StarCraft, was a household name whose public profile surpassed that of pop artists and movie stars.
Choi, a self-described “midlevel player,” says even today he is occasionally recognized by taxi drivers who
used to watch him on TV.
Beyond gaming circles, however, an unease had begun to sink in. Just outside Seoul, at a hospital in the
nearby city of Uijeongbu, psychiatrist Lee Hae-kook witnessed StarCraft mania unfold. But his eyes weren’t
on its popularity. He was looking at a pattern of medical incidents involving computer games …
On May 25, 2019 … the World Health Organization [passed] the 11th revision of the International Classification
of Diseases … the addition of “gaming disorder,” defined as “a pattern of persistent or recurrent gaming
behaviour” accompanied by a loss of control and functional impairment. It is only the second globally
recognized behavioural addiction; the first was gambling … one of the central disagreements hanging over
the WHO’s decision: Is excessive gaming truly a unique disorder, or is it simply a manifestation of other
conditions? …

THINK ABOUT WHAT YOU HAVE READ IN [10-15]. [63] TOO.
[24] How memes got weaponized: A short history
Joan Donovan. MIT Technology Review. Oct 24, 19
www.technologyreview.com/s/614572/political-war-memes-disinformation/

In October 2016, a friend of mine learned that one of his wedding photos had made its way into a post on a
right-wing message board. The picture had been doctored to look like an ad for Hillary Clinton’s campaign,
and appeared to endorse the idea of drafting women into the military. A mutual friend of ours found the image
first and sent him a message: “Ummm, I saw this on Reddit, did you make this?”
This was the first my friend had heard of it. He hadn’t agreed to the use of his image, which was apparently
taken from his online wedding album. But he also felt there was nothing he could do to stop it. So rather than
poke the trolls by complaining, he ignored it and went on with his life.
Most of his friends had a laugh at the fake ad, but I saw a huge problem. As a researcher of media manipulation
and disinformation, I understood right away that my friend had become cannon fodder in a “meme war”—the
use of slogans, images, and video on social media for political purposes, often employing disinformation and
half-truths.
While today we tend to think of memes as funny images online, Richard Dawkins coined the term back in
1976 in his book The Selfish Gene, where he described how culture is transmitted through generations. In his
definition, memes are “units of culture” spread through the diffusion of ideas. Memes are particularly salient online because the internet crystallizes them as artefacts of communication and accelerates their distribution through subcultures. Importantly, as memes are shared they shed the context of their creation, along with their authorship. Unmoored from the trappings of an author’s reputation or intention, they become the collective property of the culture. As such, memes take on a life of their own, and no one has to answer for transgressive or hateful ideas.
And while a lot of people think of memes as harmless entertainment—funny,
snarky comments on current events—we’re far beyond that now. Meme wars
are a consistent feature of our politics, and they’re not just being used by
internet trolls or some bored kids in the basement, but by governments,
political candidates, and activists across the globe. Russia used memes and
other social-media tricks to influence the US election in 2016, using a troll
farm known as the Internet Research Agency to seed pro-Trump and anti-
Clinton content across various online platforms. Both sides in territorial
conflicts like those between Hong Kong and China, Gaza and Israel, and India
and Pakistan are using memes and viral propaganda to sway both local and
international sentiment.
In 2007, for example, as he was campaigning for president, John McCain
jokingly started to sing “Bomb bomb bomb, bomb bomb Iran” to the tune of
the Beach Boys’ popular song “Barbara Ann.” McCain, an Iran hawk, was
talking up a possible war using the well-worn tactic of humour and familiarity:
easy to dismiss as a joke, yet serving as a scary reminder of US military
power. But it became a political liability for him. The slogan was picked up by
civilian meme-makers, who spread and adapted it until it went viral. His
opponent, Barack Obama, in essence got unpaid support from people who
were better at creating persuasive content than his own campaign staff.
The viral success of memes has led governments to try imitating the genre in their propaganda. These campaigns are often aimed at the young, like the US Army’s social-media-focused “Warriors Wanted” program, or the British Army campaign that borrows the visual language of century-old recruiting posters to make fun of millennial stereotypes. These drew ridicule when they were launched earlier this year, but they did boost recruitment.
However, using memes this way misses the point entirely. As mentioned, great memes are authorless. They move about the culture without attribution. Much more authentic military meme campaigns are coming from soldiers themselves, such as the memes referencing the bungling idiot known simply as “Carl.” US service members and veterans run websites that host jokes and images detailing the reality of military life. Yet these serve a purpose not so different from that of official propaganda. They often feature heavily armed soldiers and serve to highlight, even in jokes, the tremendous destructive capacity of the armed forces. In turn, such memes have been turned into commercial marketing campaigns, such as one for the veteran-owned clothing company Valhalla Wear.
Recognizing this power of memes generated by ordinary people to serve a state’s propaganda narrative, in 2005 a Marine Corps major named Michael Prosser wrote a master’s thesis titled “Memetics—A Growth Industry in US Military Operations,” in which he called for the formation of a meme warfare centre that would enrol people to produce and share memes as a way of swaying public opinion.

Prosser’s idea didn’t come to fruition, but the US government did come to recognize memetics as a threat. Beginning in 2011, the Defence Advanced Research Projects Agency offered $42 million in grants for research into what it called “social media in strategic communications,” with the hope that the government could detect “purposeful or deceptive messaging and misinformation” and create countermessaging to fight it. Yet that research didn’t prepare DARPA for Russia’s 2016 disinformation campaign. Its extent was uncovered only by reporters and academics. That revealed a fatal flaw in national security: foreign agents are nearly impossible to detect when they hide within the civilian population. Unless social-media companies cooperate with the state to monitor attacks, this tactic remains in play.
My friend’s wedding photo provides a good illustration of how something as seemingly trivial as a meme can be turned into a powerful political weapon. In 2016, a Reddit message board, r/The_Donald, was a well-known meme factory for all things Trump. Imagery and sloganeering were beta-tested and refined there before being deployed by swarms of accounts on social-media platforms. Famous viral slogans launched from The_Donald included those having to do with “Pizzagate” and the Seth Rich murder conspiracy.

My friend’s picture was appropriated for a memetic warfare operation called #DraftMyWife or #DraftOurDaughters, which aimed to falsely associate Hillary Clinton with a revival of the draft. The strategy was simple: the perpetrators took imagery from Clinton’s official digital campaign materials, as well as pictures online like my friend’s, and altered them to make it look as if Clinton would draft women into the military if she became president. Someone who saw one of these fake campaign ads and then searched online would find that Clinton had in fact spoken in June 2016 in support of a bill that included a provision making women eligible to be drafted—but only in case of a national emergency. The bill was passed, but it was later changed to remove that requirement. This is what made #DraftMyWife sneaky—it was based on a kernel of truth.
Memes like this often use a process called “trading up the chain,” pioneered by media entrepreneur Ryan
Holiday, who describes the method in his book Trust Me, I’m Lying. Campaigns begin with posts in blogs or
other news outlets with low standards. If all goes well, somebody notable will inadvertently spread the
disinformation by tweet, which then leads to coverage in bigger and more reputable outlets. #DraftMyWife was
outed fairly early on as a hoax and got debunked in the Washington Post, the Guardian, and elsewhere. The
problem is, taking the trouble to correct disinformation campaigns like these can satisfy the goal of spreading
the meme as far as possible—a process called amplification.
Memes online make hoaxes and psychological operations easy to pull off on an international scale. We should
view them as a serious threat. The good news is that a bill in the works in the US Congress would form a
national commission to assess the threat posed by foreign and domestic actors manipulating social media
to cause harm. Just focusing on those actors misses the point, though, for much the same reason those
meme-inspired military recruiting campaigns missed the point. Memetic warfare works only if those waging it
can rely on massive public participation to spread the memes and obscure their original authors. So rather
than going after the meme creators, politicians and institutions looking to counter meme war might do better
to strengthen the institutions that create and distribute reliable information—the media, academia,
nonpartisan government agencies, and so on—while US cyber-defence works with the platform companies
to root out influence operations.
And if that doesn’t work, blame Carl.
Joan Donovan leads the Technology and Social Change Research Project in the Shorenstein Centre at the Harvard Kennedy School of Government.

WHAT CAN WE DO WHEN PROBLEMS CANNOT BE ERADICATED AND EVEN EVOLVE INTO BIGGER CHALLENGES?
[25] Dark Web Drug Sellers Dodge Police Crackdowns (extracts)
Nathaniel Popper. The New York Times. Jun 11, 19
www.nytimes.com/2019/06/11/technology/online-dark-web-drug-markets.html

Authorities in the United States and Europe recently staged a wide-ranging crackdown on online drug
markets, taking down Wall Street Market and Valhalla, two of the largest drug markets on the so-called dark
web. Yet the desire to score drugs from the comfort of home and to make money from selling those drugs
appears for many to be stronger than the fear of getting arrested.
Despite enforcement actions over the last six years that led to the shutdown of about half a dozen sites —
including the most recent two — there are still close to 30 illegal online markets, according to DarknetLive, a
news and information site for the dark web … That means the fight against online drug sales is starting to
resemble the war on drugs in the physical world: There are raids. Sites are taken down; a few people are
arrested. And after a while the trade and markets pop up somewhere else. “The instability has become sort of
baked into the dark-web market experience,” said Emily Wilson, an expert on the dark web at the security
firm Terbium Labs. “People don’t get quite as scared by it as they did the first few times.”
Dark web markets are viewed as one of the crucial sources of fentanyl and other synthetic opioids. These
drugs are often produced in China and sent to users found on the dark net. The packages flowing from China
are blamed for compounding the opioid crisis in the United States. On Empire, one of the largest markets
still online, people could have their pick of more than 26,000 drug and chemical listings, including over 2,000
opioids, shipped right to their mailbox.
Illicit online drug sales have grown in complexity and volume since the shutdown of Silk Road, the original
dark net market that came online in 2011 and initially offered only a small selection of psychedelic mushrooms.
When the authorities took down the Silk Road in 2013 and jailed its creator, Ross Ulbricht, there was a
widespread assumption that his failure and punishment would deter imitators. But the dealers who had been
selling the drugs on the market migrated to competing sites set up with a similar infrastructure, using the Tor
web browser, which hides the location of the websites and their viewers, and Bitcoin, which allows for
essentially anonymous payments. A few years later, in 2017, when the police took down two of the biggest
successors to Silk Road, AlphaBay and Hansa market, there was five times as much traffic happening on the
dark net as the Silk Road had at its peak, according to Chainalysis, a firm that analyses Bitcoin traffic …
Governments have dedicated increasingly substantial resources to fighting dark net markets, especially as
their role in the rise of synthetic opioids has become more clear. In early 2018, the F.B.I. created the Joint
Criminal Opioid Darknet Enforcement team, or J-Code, with more than a dozen special agents and staff.
Europol has its own dedicated dark web team. And authorities everywhere have broadened their focus beyond
just the administrators overseeing the markets. During the first few months of 2019, American officials
conducted an operation called SaboTor, which focused on the vendors selling drugs on the dark net. There
were 61 arrests in just a few weeks. One ring, in the Los Angeles area, was said to be responsible for shipping
1,500 packages of crack, heroin and methamphetamine each month …
But the impact of the law enforcement activity of the last two years could be temporary. Data from Chainalysis
suggests that before the latest crackdown, overall transactions on the dark net had recovered to nearly 70
percent of the previous peak, right before AlphaBay went down, and were growing each month. Mr. Downing,
the Justice Department lawyer, said that for now there was a recognition, even among the authorities, that
dark net markets have become an enduring part of the criminal economy …
In late May, the Dutch police shut down a site that offered what is known as Bitcoin laundering, a service that
made it harder for police to track Bitcoin transactions by mixing transactions together. Shortly before that,
American authorities took down a news website, known as DeepDotWeb, that lived on the traditional web,
providing reviews and links to dark net sites. The absence of the site is likely to make it harder for newcomers
to find their way to dark web markets. The surviving markets, and new ones that have already popped up,
have adopted measures to make them more difficult targets for the authorities. Most markets now allow for
payments in alternative cryptocurrencies, like Monero, which are designed to be harder to track. And
DeepDotWeb already has a formidable successor in the social network and news site Dread, which is available
only on the dark net. One of the new markets that have emerged recently, Cryptonia, has promised that it has
figured out many of the flaws that made previous sites vulnerable to the police … There is also some concern
that sellers will move away from big markets and offer their wares directly to customers on encrypted
messaging systems like Telegram, which are difficult for the police to monitor.

[26] FireEye Cyber Threat Map
www.fireeye.com/cyber-map/threat-map.html
[27] The complete WIRED Guide to Cyberwar
Andy Greenberg. WIRED. Aug 23, 19
www.wired.com/story/cyberwar-guide/

ARE ADVANCES IN BIONICS A CAUSE FOR CELEBRATION OR WORRY?
[28] www.bbc.com/future/tags/bionics
[29] A brain-controlled exoskeleton has let a paralysed man walk in the lab
MIT Technology Review. Oct 4, 19
www.technologyreview.com/f/614476/a-brain-controlled-exoskeleton-has-let-a-paralyzed-man-walk-in-the-lab/

A paralysed man has walked again thanks to a brain-controlled exoskeleton suit. Within the safety of a
lab setting, he was also able to control the suit’s arms and hands, using two sensors on his brain.
The patient was a man from Lyon named Thibault, who fell 40 feet (12 meters) from a balcony four years ago,
leaving him paralysed from the shoulders down. Thibault had surgery to place two implants, each containing
64 electrodes, on the parts of the brain that control movement. Software then translated the brain waves read
by these implants into instructions for movement. Thibault trained for months, using his brain signals to control
a video game avatar in order to hone the skills required to operate the exoskeleton, which was held up by a ceiling-mounted harness. He was able to walk slowly in the suit and then stop, as he chose.
The development of the exoskeleton, by Clinatec and the University of Grenoble, is described in a paper
in The Lancet this week. The hope is that one day similar technology could eventually let people in wheelchairs
move them using their minds. It’s an impressive breakthrough, but the device is many years away from being
publicly available. For example, researchers need to find a way to get the suit to safely balance itself before
it can be used outside the laboratory.

AS RESEARCHERS VENTURE INTO UNCHARTED TERRITORY, WHO SHOULD HAVE A SAY OVER THEIR WORK?
[30] What is gene editing and how can it be used to rewrite the code of life? (extracts)
Ian Sample. The Guardian. Jan 15, 18
www.theguardian.com/science/2018/jan/15/gene-editing-and-what-it-really-means-to-rewrite-the-code-of-life

Genes are the biological templates the body uses to make the structural proteins and enzymes needed to
build and maintain tissues and organs. They are made up of strands of genetic code, denoted by the letters
G, C, T and A. Humans have about 20,000 genes bundled into 23 pairs of chromosomes all coiled up in the
nucleus of nearly every cell in the body. Only about 1.5% of our genetic code, or genome, is made up of
genes. Another 10% regulates them, ensuring that genes turn on and off in the right cells at the right time, for
example. The rest of our DNA is apparently useless. “The majority of our genome does nothing,” says Gerton
Lunter, a geneticist at the University of Oxford. “It’s simply evolutionary detritus.”
Scientists liken gene editing to the find and replace feature used to correct misspellings in documents written
on a computer. Instead of fixing words, gene editing rewrites DNA, the biological code that makes up the
instruction manuals of living organisms. With gene editing, researchers can disable target genes, correct
harmful mutations, and change the activity of specific genes in plants and animals, including humans.
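Since the article itself reaches for the find-and-replace analogy, it can be made literal in a few lines of Python; the DNA sequences below are invented purely for illustration.

```python
# The "find and replace" analogy, taken literally: locate a target
# sequence in a DNA string (the job of the Crispr guide molecule) and
# swap in a healthy version (the repair template). All sequences here
# are made up for illustration only.
genome = "ATGGCCGTCGAGTTTAAC"
mutated = "GTCGAG"   # faulty stretch the guide would be designed to match
healthy = "GTGGAG"   # corrected strand supplied alongside Cas9

site = genome.find(mutated)                    # "find": locate the target
edited = genome.replace(mutated, healthy, 1)   # "replace": mend the gene

print(site)     # position where the faulty sequence was found
print(edited)   # genome with the healthy sequence swapped in
```

Real gene editing is far messier than a string substitution, as the article goes on to note, but the analogy captures the two-step logic of a guide plus a repair template.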
Much of the excitement around gene editing is fuelled by its potential to treat or prevent human diseases.
There are thousands of genetic disorders that can be passed on from one generation to the next … They are
not rare: one in 25 children is born with a genetic disease … Gene editing holds the promise of treating these
disorders by rewriting the corrupt DNA in patients’ cells. But it can do far more than mend faulty genes. Gene
editing has already been used to modify people’s immune cells to fight cancer or be resistant to HIV infection.
It could also be used to fix defective genes in human embryos and so prevent babies from inheriting serious
diseases. This is controversial because the genetic changes would affect their sperm or egg cells, meaning
the genetic edits and any bad side effects could be passed on to future generations.
The agricultural industry has leapt on gene editing for a host of reasons. The procedure is faster, cheaper
and more precise than conventional genetic modification, but it also has the benefit of allowing producers to
improve crops without adding genes from other organisms – something that has fuelled the backlash
against GM crops in some regions … Other branches of medicine have also seized on its potential.
Companies working on next-generation antibiotics have developed otherwise harmless viruses that find and
attack specific strains of bacteria that cause dangerous infections. Meanwhile, researchers are using gene
editing to make pig organs safe to transplant into humans. Gene editing has transformed fundamental
research too, allowing scientists to understand precisely how specific genes operate.
There are many ways to edit genes, but the breakthrough behind the greatest achievements in recent years
is a molecular tool called Crispr-Cas9. It uses a guide molecule (the Crispr bit) to find a specific region in
an organism’s genetic code – a mutated gene, for example – which is then cut by an enzyme (Cas9). When
the cell tries to fix the damage, it often makes a hash of it, and effectively disables the gene. This in itself is
useful for turning off harmful genes. But other kinds of repairs are possible. For example, to mend a faulty
gene, scientists can cut the mutated DNA and replace it with a healthy strand that is injected alongside the
Crispr-Cas9 molecules. Different enzymes can be used instead of Cas9, such as Cpf1, which may help edit
DNA more effectively … One way is to pack the gene editing molecules into harmless viruses that infect
particular types of cell. Millions of these are then injected into the bloodstream or directly into affected tissues. Once in the body, the viruses invade the target cells and release the gene editing molecules to do their work
… Researchers have [also] used fatty nanoparticles to carry Crispr-Cas9 molecules to the liver, and tiny zaps
of electricity to open pores in embryos through which gene editing molecules can enter …
Modern gene editing is quite precise but it is not perfect. The procedure can be a bit hit and miss, reaching
some cells but not others. Even when Crispr gets where it is needed, the edits can differ from cell to cell, for
example mending two copies of a mutated gene in one cell, but only one copy in another. For some genetic
diseases this may not matter, but it may if a single mutated gene causes the disorder. Another common
problem happens when edits are made at the wrong place in the genome. There can be hundreds of these
“off-target” edits that can be dangerous if they disrupt healthy genes or crucial regulatory DNA.
The overwhelming effort in medicine is aimed at mending faulty genes in children and adults. But a handful of
studies have shown it should be possible to fix dangerous mutations in embryos too. In 2017, scientists
convened by the US National Academy of Sciences and the National Academy of Medicine cautiously
endorsed gene editing in human embryos to prevent the most serious diseases, but only once shown to be
safe. Any edits made in embryos will affect all of the cells in the person and will be passed on to their children,
so it is crucial to avoid harmful mistakes and side effects.
Engineering human embryos also raises the uneasy prospect of designer babies, where embryos are altered
for social rather than medical reasons; to make a person taller or more intelligent, for example. Traits like
these can involve thousands of genes, most of them unknown … for the time being, designer babies are a
distant prospect.
What’s next: • Race to get gene editing therapies into the clinic • Base editing • Engineered gene drives • Epigenome editing

[31] The Upside Of Bad Genes (extracts)
Moises Velasquez-Manoff. The New York Times. Jun 17, 17
www.nytimes.com/2017/06/17/opinion/sunday/crispr-upside-of-bad-genes.html

There’s a well-to-do couple thinking about having children. They order a battery of genetic tests to ensure that
there’s nothing untoward lurking in their genomes. And they discover that they each carry one copy of the
sickle cell gene. If their children inherit two copies of the gene, they could develop anaemia, which can cause
joint pain, weakness and even death. So what should the couple do? For the last two decades, they’ve had
the option of artificially fertilizing embryos and selecting only those that lack the sickle cell trait. Now a new
possibility is on the horizon: They may soon be able to edit the offending gene right out of their own sperm,
eggs or embryos, erasing it from their bloodline forever.
The technology that will allow this is called Crispr-Cas9 … The method is probably at least a few years away
from being applied in clinical settings. Still, some are already worried that, when it comes to improving our
own genome, we don’t yet know enough about how genes work to wield this power without unintended
consequences.
In 2015, the journal Science declared Crispr the “breakthrough of the year,” and researchers in China edited
human embryos for the first time. Scientists also convened a meeting in Napa, Calif., to talk through the ethical
implications of this new technology, and to draft guidelines. A fundamental issue that came up, says Jennifer
Doudna, a biochemist at the University of California, Berkeley, and a pioneer in Crispr research, was that
scientists really don’t understand enough about the upside of genes we consider “bad” to begin editing them
willy-nilly. Sickle cell was a case in point. The gene is usually found in people who live in, or whose ancestors
came from, sub-Saharan Africa, the Arab world and India; in those places, having one copy of the gene can
prevent the worst symptoms of malaria. Of every four children our imaginary couple might have, one will
probably be afflicted with sickle cell disease, but two would most likely be protected from malarial disease.
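The "one in four afflicted, two in four protected" arithmetic follows from a simple Punnett square, sketched here in Python (using the standard genotype labels, A for the normal allele and S for the sickle allele):

```python
from collections import Counter
from itertools import product

# Two carrier parents (genotype AS) each pass on one allele at random;
# the four equally likely combinations give the offspring ratios.
parent1, parent2 = "AS", "AS"
offspring = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
total = sum(offspring.values())

for genotype, count in sorted(offspring.items()):
    print(f"{genotype}: {count} in {total}")
# AA: 1 in 4 -- unaffected, but no malaria protection
# AS: 2 in 4 -- carriers, likely protected from severe malaria
# SS: 1 in 4 -- sickle cell disease
```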
Ditto with the gene variants that cause the lung disease cystic fibrosis. In parts of northwest Europe, about 1
in 25 people carries a single copy of the gene. And while two copies cause disease, it has long been
hypothesized that having just one protects against tuberculosis — the White Plague that ravaged Europe for
a few hundred years.
Both these genes probably helped us survive in the past, so is it wise to remove them now? Earlier this year,
the National Academy of Sciences and the National Academy of Medicine issued recommendations on editing
embryos and other germ line cells, calling for a high degree of caution but not prohibition. An obvious
counterargument to the precautionary approach is that the world has changed. We in the developed world
don’t inhabit an environment rife with malaria and TB anymore. We have drugs to protect us when infection
strikes. So doomsday preppers notwithstanding, removing injurious and outdated genes is a logical step in
our (now self-directed) evolution. The problem, Dr Doudna points out, is that new pathogens for which we
don’t have cures continue to emerge — like H.I.V. and SARS and drug-resistant variants of TB. In fact, as the
world has become more crowded and interconnected, the emergence of new pathogens has accelerated.
Those “bad” gene variants might still come in handy, she says. More broadly, genetically diverse populations
tend to be more resilient, precisely because they have more genetic resources to draw on when unforeseen challenges arise.
To further complicate matters, some of the gene variants now linked with disease probably don’t cause as
many problems in other environments. Consider the border between Finland and Russia, where there’s a
sharp gradient in the prevalence of autoimmune disorders like celiac disease and Type 1 diabetes … these
conditions have become worrisomely common in Finland in recent decades, but are between one-fifth and
one-sixth as common on the Russian side, despite the fact that the Russians are just as genetically
predisposed to developing them … Finnish scientists think that exposure to a particular community of microbes
— one that more resembles the microbiota of our less hygienic past — prevents the diseases from emerging
in Russia. That’s important because at least some of the gene variants associated with autoimmune disease
are probably useful; they most likely helped us battle infections in the past.
So instead of rewriting our genetic code, a better approach might be to change the interplay between our
genes and environment — in this case by altering the microbes we encounter. This dynamic may also
apply to a gene linked with dementia. Carriers of the ApoE4 variant have an up-to-fourfold increased risk of
Alzheimer’s disease … strangely, the gene was linked to enhanced cognitive performance in children living in
Brazilian slums … here’s the point: A gene that we now think increases the risk of cognitive decline may
actually protect against it in other environments. Most biomedical research is done on modernized
populations in industrialized cities. But if we edit out genes based on how they work in those populations
exclusively, “we might disrupt processes that we didn’t realise were important,” …
Finally, there’s diet. Back in the late 1980s, the anthropologist Fatimah Jackson discovered that the
prevalence of the sickle cell trait varied considerably across Liberia, a small country. It was more common in
the northwest than the southeast, even though the infection that makes the trait advantageous — malaria —
was everywhere … She realized that regular consumption of cassava — more common in the Southeast than
the Northwest — could, by working as an antimalarial drug, affect the prevalence of the sickle cell trait, by
making it less advantageous … in the Northwest, … people who had two copies of the sickle cell gene still ate
enough [cassava] to partly avoid sickling … diet may have prevented a genetic disease from fully manifesting.

[32] Gene editing like Crispr is too important to be left to scientists alone
Natalie Kofler. The Guardian. Oct 22, 19
www.theguardian.com/commentisfree/2019/oct/22/gene-editing-crispr-scientists

Two little girls called Lulu and Nana celebrate their first birthday this month. The Chinese twins are the first
humans to have every cell in their body genetically modified using Crispr-Cas9, a revolutionary gene-editing
process that allows the DNA in embryos to be edited to carry certain characteristics that can be passed down
to their children and grandchildren.
When the twins’ birth was announced to the world by the US-trained biochemist He Jiankui, he described how
he and his Chinese and American colleagues had used Crispr to introduce genetic mutations into otherwise
healthy embryos in an attempt to minimise the girls’ susceptibility to HIV infection. Such an intervention was
both unnecessary and possibly ineffective, and in direct defiance of scientific consensus and established
ethical norms. As a molecular biologist who has spent over a decade in laboratories, I was horrified by the
experiment.
The stories of these girls’ own experiences remain to be told. I sincerely hope theirs is a life filled with joy,
playfulness and love, but it is likely to also feature health consequences.
In the months following, He was labelled a “rogue” scientist, and elements of the mainstream scientific
community scrambled to distance themselves. Government bodies rushed to assemble expert groups to
develop regulatory guidelines that could prevent similar actions from other “outliers”.
But there is a big difference between trying to insulate the scientific establishment from criticism and making
science fit to be a meaningful participant in society. The culture of science must fundamentally transform itself
– becoming more diverse and more open – or it will be unfit for the task ahead.
The global market for Crispr gene-editing products as medicine, to develop new crops (such as spicy tomatoes
or long-life mushrooms) and other uses is predicted to be $5.3bn by 2025. Continued advances in Crispr
precision and ease of use, like the just reported prime editing approach, are likely to make that number even
higher. Crispr gene editing has the potential to treat a myriad of monogenic diseases from sickle cell anaemia
to muscular dystrophy and cancer. Parents may one day be able to genetically customise their children’s health,
physical features and abilities. Crispr will be the genetic scissors that tailor the human gene pool.
With such power in hand, we must ask: whose vision of the future are we trying to create? Most of us support
a future where Crispr is used to treat over 10,000 monogenic diseases that impact 75 million people every year.
But should Crispr also be used to “correct” deafness, for example, and by extension, eradicate a rich and vibrant
deaf community? Should it be used to increase intelligence or muscle strength? What about changing children’s
eye colour? Or their sexuality? The future becomes blurry when Crispr applications move beyond treating
disease to instead perpetuate subjective perceptions of normalcy or supremacy. And gene-edited children
will be expensive, creating the potential to make the world more inequitable, to make those who are already
vulnerable more vulnerable, and to further entrench the dominant view of the privileged. That is a future we
must fight tooth and nail to avoid.
Experts in science, ethics and governance are making some efforts to ensure Crispr researchers heed these
societal concerns. The World Health Organization (WHO) has enlisted an expert advisory committee chaired
by the South African constitutional judge Edwin Cameron and Margaret Hamburg, head of the American
Association for the Advancement of Science, to develop global governance recommendations for human
genome editing. An International Commission on the Clinical Use of Germline Genome Editing has also been
established by the US National Academy of Medicine and National Academy of Sciences, and the UK’s Royal
Society. For far too long, regulatory officials and technology developers (either academic scientists or for-profit
companies) have steered the direction of technology.
In addition to clear research guidelines that support safe, therapeutic gene editing, I hope to see new
recommendations that can help redistribute decision-making power. An open-access online registry of
Crispr clinical trials, recently proposed by the WHO, will hopefully promote a more open and transparent
process. Venues are also needed where early and sustained public deliberation can take place to help
integrate the concerns of society in deciding how Crispr should be used.
However, guidelines for human gene editing were already in place prior to Lulu and Nana’s birth. Publicly
available research guidelines clearly stated that it was still too early to safely or ethically implant Crispr-edited
human embryos. Yet He has defended his experiments by arguing that he had “complied with all the criteria”
laid out by those guidelines. And it has since been revealed that multiple American and Chinese scientists knew
of He’s experimental intentions and yet allowed them to proceed. Guidelines for research and regulation can
only go so far to safeguard ethical use of Crispr.
Some argue that a moratorium on gene editing is needed until more effective guidelines are in place. But I
worry that introducing more guidelines will only treat the symptoms of a diseased scientific system – one that
lacks diversity in its scientists and is fuelled by competition. Most of modern science, and by extension the
technologies it creates, has been shaped by a very narrow and privileged worldview. The scientific enterprise
has been dominated by men – even today women make up only 28.8% of researchers worldwide as of last
June. It has been exclusionary to people of colour – a Nobel prize has yet to be awarded to a black scientist.
And it rarely includes people with disabilities or the very people scientists are attempting to “treat”. If our ultimate
goal is for Crispr to equitably serve society, then we need to make sure those who steer its development
realistically represent society. Scientists who come from historically marginalised backgrounds can introduce
much-needed critical perspectives.
We need to bring more diversity into the lab, but we also need to get scientists out of the lab and into society.
Scientists have often been isolated from the very issues and communities their research seeks to impact. We
need to support channels that allow diverse members of society to inform scientific research: channels
such as the proposed global observatory for gene editing and the Association for Responsible Research in
Genome Editing, which gather scientists with patient advocacy groups and disability activists to inform Crispr
research. More opportunities for scientists to learn from the public are also needed, such as Involve, a non-
profit organisation dedicated to public participation, and Editing Nature, an initiative I founded to empower
impacted communities in deciding how Crispr is used. As a vital first step, we must create a scientific community
that values diversity and openness.
Incentive structures – in large part created by scientific funding bodies, research institutions and publishers –
are fuelling unhealthy competition and opacity among scientists. In a battle to be the first to discover,
scientists are forced to shield their ideas and their research. This opacity hinders open collaborations within the
scientific community and with the public. As we saw with He’s experiments, this competition and opacity creates
a dangerous scenario when Crispr is involved. To safeguard the scientific enterprise, cooperativity and humility
need to instead become central virtues of science. Scientific incentive structures should reward scientists who
engage with the public and participate in cross-disciplinary collaborations. Meanwhile scientists should
be trained to appreciate the limits of their own knowledge, and to know when to incorporate outside
expertise and worldviews.
This work doesn’t stop at the research bench, or even in the classroom. Crispr holds the potential to forever
change the arc of humanity, making its ethical use everyone’s responsibility. As citizens, we must push for
medical school deans, professors, grant officers, journal editors, regulatory officials and those who design global
research guidelines to come from a wide variety of backgrounds. We must also pressure our governments to
uphold inclusive and open regulatory processes, as well as participating in public discourse to amplify
historically marginalised voices. Only in working together can we make science more open and inclusive, and
only then can its products benefit us all.
Natalie Kofler is a trained molecular biologist and lecturer in bioethics at Yale University and Harvard Medical School

[33] Sociogenomics is opening a new door to eugenics (extracts)
Nathaniel Comfort. MIT Technology Review. Oct 23, 18
www.technologyreview.com/s/612275/sociogenomics-is-opening-a-new-door-to-eugenics/

Want to predict aggression? Neuroticism? Risk aversion? Authoritarianism? Academic achievement? This is
the latest promise from the burgeoning field of sociogenomics.
There have been many “DNA revolutions” since the discovery of the double helix, and now we’re in the midst
of another. A marriage of the social and natural sciences, it aims to use the big data of genome science—
data that’s increasingly abundant thanks to genetic testing companies like 23andMe—to describe the genetic
underpinnings of the sorts of complex behaviours that interest sociologists, economists, political scientists,
and psychologists. The field is led by a group of mostly young, often charismatic scientists who are willing to
write popular books and op-eds, and to give interviews and high-profile lectures. This work shows that the
nature-nurture debate never dies—it is just cloned and raised afresh in a new world.
Advocates of sociogenomics envision a prospect that not everyone will find entirely benevolent: health “report
cards,” based on your genome and handed out at birth, that predict your risk of various diseases and
propensity for different behaviours. In the new social sciences, sociologists will examine the genetic
component of educational attainment and wealth, while economists will envision genetic “risk scores” for
spending, saving, and investment behaviour. Without strong regulation, these scores could be used in school
and job applications and in calculating health insurance premiums. Your genome is the ultimate pre-existing
condition.
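A polygenic "risk score" of the kind described above is, at heart, just a weighted sum: each genetic variant contributes its GWAS-estimated weight multiplied by the number of risk alleles a person carries. A minimal sketch, with variant names and weights wholly invented for illustration:

```python
# Hypothetical per-variant weights from a GWAS (positive = raises risk).
gwas_weights = {"rs0001": 0.30, "rs0002": -0.12, "rs0003": 0.05}

# One person's genotype: copies of the risk allele carried (0, 1 or 2).
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

# The score is a plain weighted sum over all variants.
score = sum(gwas_weights[v] * genotype[v] for v in gwas_weights)
print(f"polygenic score: {score:.2f}")  # 0.30*2 - 0.12*1 + 0.05*0 = 0.48
```

Real scores sum over thousands to millions of variants and are then ranked against a reference population, which is where the "report card" framing comes from.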
Such a world could be exciting or scary (or both). But sociogenomicists generally focus on the sunny side.
And anyway, they say with a shrug, there’s nothing we can do about it. “The genie is out of the bottle,” writes
the educational psychologist Robert Plomin, “and cannot be stuffed back in again.”
IS THIS WHAT THE SCIENCE SAYS, IN FACT? AND IF IT IS, IS IT A VALID BASIS FOR SOCIAL POLICY? Answering these
questions demands setting this new form of hereditarian social science in context—considering not merely
the science itself but the social and historical perspective. Doing so can help us understand what’s at stake
and what the real risks and benefits are likely to be …
Sociogenomics is the latest chapter in a tradition of hereditarian social science dating back more than 150
years. Each iteration has used new advances in science and unique cultural moments to press for a specific
social agenda. It has rarely gone well. The originator of the statistical approach that sociogenomicists use
was Francis Galton, a cousin of Charles Darwin. Galton developed the concept and method of linear
regression—fitting the best line through a curve—in a study of human height. Like all the traits he studied,
height varies continuously, following a bell-curve distribution. Galton soon turned his attention to personality
traits, such as “genius,” “talent,” and “character.” As he did so, he became increasingly hereditarian. It was
Galton who gave us the idea of nature versus nurture. In his mind, despite the “sterling value of nurture,”
nature was “by far the more important.”
Galton and his acolytes went on to invent modern biostatistics—all with human improvement in mind. Karl
Pearson, Galton’s main protégé (who invented the correlation coefficient, a workhorse statistic of genome-wide association studies (GWASs) and
hence of sociogenomics), was a socialist who believed in separating sex from love. The latter should be spread
around liberally, the former tightly regulated to control who bred with whom—that is, for eugenic ends.
The point is that eugenics was not, as some claim, merely an unfortunate bit of specious science. It was
central to the development of biological statistics. This entanglement runs down the history of hereditarian
social science, and today’s sociogenomicists, like it or not, are heir to it.
Early in the 20th century, a vicious new strain of eugenics emerged in America, based on the new science of
Mendelian genetics. In the context of Progressive-era reformist zeal, belief in a strong government, and faith
in science to solve social problems, eugenics became the basis of coercive social policy and even law.
After prominent eugenicists canvassed, lobbied, and testified on their behalf, laws were passed in dozens of
states banning “miscegenation” or other “dysgenic” marriage, calling for sexual sterilization of the unfit, and
throttling the stream of immigrants from what certain politicians today might refer to as “shithole countries.”
At the end of the 1960s, the educational psychologist Arthur Jensen published an enormous article in
the Harvard Educational Review arguing that Negro children (the term of the day) were innately less intelligent
than white children. His policy action item: separate and unequal school tracks, so that African-American
children would not become frustrated by being over-challenged with abstract reasoning. What became known
as “Jensenism” has resurfaced every few years, in books such as Charles Murray and Richard
Herrnstein’s The Bell Curve (1994) and the journalist Nicholas Wade’s A Troublesome Inheritance (2014).
Given the social and political climate of 2018, today would seem a particularly inauspicious time to undertake
a new and potentially vastly more powerful expression of genetic determinism. True, the research papers,
white papers, interviews, books, and news articles I’ve read on the various branches of sociogenomics
suggest that most researchers want to move past the racism and social stratification promoted by earlier
hereditarian social scientists. They downplay their results, insist upon avoiding bald genetic determinism, and
remain inclusive in their language. But, as in the past, fringe groups have latched onto sociogenomic research
as evidence for their hostile claims of white superiority and nationalism …
“Eugenics is not safely in the past,” wrote Kathryn Paige Harden, a developmental behaviour geneticist at the
University of Texas, in a New York Times op-ed earlier this year. Harden lamented the rise of the so-called
human biodiversity movement (referring to it as “the eugenics of the alt-right”), with its ties to white
supremacy and its specious claims to scientific legitimacy. Members of this movement, she wrote,
“enthusiastically tweet and blog about discoveries in molecular genetics that they mistakenly believe support
the ideas that inequality is genetically determined; that policies like a more generous welfare state are thus
impotent; and that genetics confirms a racialized hierarchy of human worth.” … Indeed, the human biodiversity
crowd and other so-called “race realists” love sociogenomics …
To be clear: I am not saying that sociogenomicists are racists. I am saying that their work has serious social
implications outside the lab, and that too few in the field are taking those problems seriously.
Genetics has an abysmal record for solving social problems. In 1905, the French psychologist Alfred Binet
invented a quantitative measure of intelligence—the IQ test—to identify children who needed extra help in
certain areas. Within 20 years, Binet was horrified to discover that people were being sterilised for scoring
too low, out of a misguided fear that people of subnormal intelligence were sowing feeblemindedness genes
like so much seed corn.
What steps can we take to prevent sociogenomics from suffering the same fate? How do we ensure
that polygenic scores for educational attainment are used to offer extra help tailored to those who need it—
and ensure that they don’t become tools of stratification? Here’s one way: when the evolutionary biologist
Graham Coop and his student Jeremy Berg published a GWAS paper on the genetics of human height, they
took the extraordinary step of writing a 1,500-word blog post about what could and could not be legitimately
inferred from their paper.
Why isn’t this more common? The field needs more people like Coop—and fewer cheerleaders. It needs
scientists who reckon with the social implications of their work, especially its potential for harm—scientists
who take seriously the social critique of science, who understand their work in both its scientific and historical
contexts. It is such people who stand the best chance of using this potent knowledge productively. For
scientists studying human social genomics, doing so is a moral responsibility.

SHOULD PARENTS BE ALLOWED TO CONDUCT GENETIC TESTS ON THEIR EMBRYOS AND CHILDREN?

[34] The world’s first Gattaca baby tests are finally here (extracts)
Antonio Regalado. MIT Technology Review. Nov 8, 19
www.technologyreview.com/s/614690/polygenic-score-ivf-embryo-dna-tests-genomic-prediction-gattaca/

Anxious couples are approaching fertility doctors in the US with requests for a hotly debated new genetic test
being called “23andMe, but on embryos.” The baby-picking test is being offered by a New Jersey start-up
company, Genomic Prediction, whose plans we first reported on two years ago. The company says it can use
DNA measurements to predict which embryos from an IVF procedure are least likely to end up with any of 11
different common diseases. In the next few weeks it's set to release case studies on its first clients.
Handed report cards on a batch of frozen embryos, parents can use the test results to try to choose the
healthiest ones. The grades include risk estimates for diabetes, heart attacks, and five types of cancer.
According to flyers distributed by the company, it will also warn clients about any embryo predicted to become
a person who is among the shortest 2% of the population, or who is in the lowest 2% in intelligence. The test
is straight out of the science fiction film Gattaca, a movie that’s one of the inspirations of the start-up’s
CEO, Laurent Tellier. The company’s other cofounders are testing expert Nathan Treff and Stephen Hsu, a
Michigan State University administrator and media pundit.
So far, fertility centres have not leaped at the chance to offer the test, which is new and unproven. Instead,
prospective parents are learning about the designer baby reports through word of mouth or news articles and
taking the company’s flyer to their doctors. One such couple recently turned up at New York University’s fertility
centre in Manhattan, says David Keefe, who is chairman of obstetrics and gynaecology there. “Right off the
bat it raises all kind of questions about eugenics,” he says … couples who think they can choose kids from a
menu could be disappointed. “It’s fraught with parenting issues,” he says. “So many couples just need to feel
they have done enough.”
The company’s project remains at a preliminary stage. While some embryos have been tested by the
company, Tellier, the CEO, says he is unsure if any have yet been used to initiate a pregnancy … Our reporting
suggests the company has struggled both to validate its predictions and to interest fertility centres in them. Its
customers so far seem to be a scattering of individuals from around the world with specific family health
worries. The company declined to name them, citing confidentiality …
Genomic Prediction thinks it can piggyback on the most common type of “preimplantation” embryo test, which

screens days-old embryos for major chromosome abnormalities, called aneuploidies. Such testing has
become widespread in fertility centres for older mothers and is already employed in nearly a third of IVF
attempts in the US. The new predictions could be added to it. Fertility centres can also order tests for specific
genetic diseases, such as cystic fibrosis, where a gene measurement will give a definite diagnosis of which
embryo inherited the problem. The new polygenic tests are more like forecasts, estimating risk for common
diseases on the basis of variations in hundreds or thousands of genes, each with a small effect.
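The forecast described above is, at its core, a weighted sum. A minimal sketch of the idea follows, with variant IDs, effect sizes and allele counts invented purely for illustration (real scores sum over hundreds of thousands of variants estimated in genome-wide association studies):

```python
# Hypothetical illustration of a polygenic score: each variant contributes
# its per-allele effect size multiplied by how many copies of the risk
# allele (0, 1 or 2) the genome carries. All values below are invented.
effect_sizes = {"rs0001": 0.02, "rs0002": -0.01, "rs0003": 0.05}
allele_counts = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

polygenic_score = sum(effect_sizes[v] * allele_counts[v] for v in effect_sizes)
# 0.02*2 + (-0.01)*1 + 0.05*0 = 0.03
```

A raw score like this only ranks genomes relative to one another; turning it into an absolute risk requires calibration against a reference population, which is part of why the company frames its results as estimates rather than diagnoses.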
In a legal disclaimer, the company says it can’t guarantee anything about the resulting child and that the
assessment “is NOT a diagnostic test.” …
Treff, the start-up’s chief scientist, believes even fertile couples might begin to undergo IVF just so they can
select the best child. “I do believe this is going to be the future … we can start to ... reduce the incidence of
disease in humans through IVF,” Treff told an audience at a conference in China last month …
Genomic Prediction has so far won the most attention for the possibility of using genetic scores to pick the
most intelligent children from a petri dish. It has tried to distance itself from the controversial concept, but that’s
been difficult because Hsu, a cofounder, is frequently in the media discussing the idea. Hsu told The Guardian
this year that “accurate IQ predictors will be possible if not in the next five years, the next 10 years certainly.”
He says other countries, or the ultra-wealthy, might be the first to try to boost IQ in their kids this way.
During his talk in China, Treff called improving intelligence via embryo selection an application that “many
people think is unethical." In private, Treff tells other scientists he thinks it's doable, but wants to promote the
technology for medical purposes only. For now, the company is limiting itself to alerting parents to embryos it
predicts will be the least intelligent, with the highest chance of an IQ which qualifies as “intellectual disability”
according to psychiatric manuals.
Some experts see a transparent manoeuvre to avoid controversy. “They say they’re going to test for the
medical condition of intellectual disability, not for the smartest embryos, because they know people are going
to object to that,” says Laura Hercher, who trains genetic counsellors at Sarah Lawrence College. “They are
trying to slide, slide into traits without admitting as much.” …
At NYU, Keefe says the test raises profound questions. His centre is in Midtown Manhattan, just blocks from
a hub of finance and legal offices. He says his clientele are typically well-off professionals, “people who have
programmed everything” in life and feel “they are in control.” They sometimes even ask out loud if a mere
doctor is smart enough to help them.
The case he is working on involves a family that has two children with autism. They now want a child without
the condition, and they hope the intelligence feature of the test will help them. Treff says he counselled the
family that the Genomic Prediction test wasn’t likely to help—autism can have specific genetic causes that the
intelligence prediction isn’t designed to capture. Yet the family remains interested. They want to do whatever
they can to have a healthy kid. Keefe says he’s so far supporting their choice, but he is concerned by all that
it implies. “There is potential psychological harm to the kid,” he says. “God forbid the kid ends up with autism
after spending this money.”

SHOULD DOCTORS HAVE A LEGAL DUTY TO WARN RELATIVES OF THEIR GENETIC RISKS?

[35] Huntington's disease: Woman who inherited gene sues NHS


Fergus Walsh. BBC. Nov 18, 19
www.bbc.com/news/health-50425039

A woman who was not informed that her father had a fatal, inherited brain disorder has told the High Court
that she would have had an abortion if she'd known at the time of her pregnancy. She is suing three NHS
trusts saying they owed a duty of care to tell her about her dad's Huntington's disease.
Any child of someone with the condition has a 50% chance of inheriting it. Doctors suspected the diagnosis
after her father shot dead her mother and was detained under the Mental Health Act. The father tested positive
for Huntington's Disease, which is caused by a faulty gene and leads to the progressive loss of brain cells,
affecting movement, mood and thinking skills. It can also cause aggressive behaviour. He told doctors he did
not want his daughter told about his diagnosis, fearing she might kill herself or have an abortion if she found
out.
The claimant is known as ABC in order to protect the identity of her own daughter, who is now nine. ABC only
found out that her father had Huntington's Disease, a progressive, incurable condition, four months after
giving birth. At the High Court she said she'd been told about her father's condition by accident. "I was utterly
traumatised by the way I was told", she said. "I had no family support and was left to Google the condition."
ABC eventually had a test and found that she also carries the faulty gene. Her daughter, who's not been
tested, has a 50:50 chance of inheriting it from her. The symptoms of Huntington's Disease usually appear
between the ages of 30 and 50. ABC, who's now in her 40s, told the court: "I'm now the prime age to get
unwell. The future is absolutely terrifying." She told the High Court that had she known during her pregnancy

that she has the gene for Huntington's she would definitely have had an abortion. She is suing St George's
and two other NHS Trusts involved in the family's care, for £345,000 in damages.
In written submissions Philip Havers QC on behalf of the trusts, said the question for the court was whether
there was "a duty to disclose to her confidential information about her father against his express wishes" which
he said was "plainly not the case". The court heard that after ABC had found out about her father's disorder,
her sister also became pregnant. Philip Havers QC for the trusts said ABC had asked doctors not to tell her
sister that their father had tested positive for Huntington's. Mr Havers said it was "a bit rich" for ABC to be
bringing this claim for damages: she could have told her sister in time for her sister to have a termination, yet
the withholding of that very information was what she was complaining about in her own case.
ABC said at the time, she'd been "utterly terrified" about the impact on her sister adding that the situation
should have been managed by health professionals.
This case was first argued at the High Court in 2015 when a judge ruled that a full hearing should not go
ahead. The judgement said there was "no reasonably arguable duty of care" owed to ABC. But in 2017, the
Court of Appeal reversed that decision and said the case should go to trial. ABC is now suing St George's
Healthcare NHS Trust in south-west London and St George's Mental Health NHS Trust and Sussex
Partnership NHS Foundation Trust for damages.
If ABC wins the case, it would trigger a major shift in the rules governing patient confidentiality, and raise
questions over the potential duty of care owed to family members following genetic testing. A
spokesperson for St George's Healthcare NHS Trust said: "This case raises complex and sensitive issues in
respect of the competing interests between the duty of care and the duty of confidentiality. It will be for the
court to adjudicate on those issues during the trial."
The case continues.

SHOULD COMPANIES BE ABLE TO SET HIGH PRICES FOR THE LIFE-SAVING TREATMENTS THEY HAVE CREATED?

[36] $2.1m Novartis gene therapy to become world's most expensive drug
The Guardian. May 25, 19
www.theguardian.com/science/2019/may/25/21m-novartis-gene-therapy-to-become-worlds-most-expensive-drug

Swiss drugmaker Novartis has received US approval for its spinal muscular atrophy gene therapy Zolgensma
– pricing the one-time treatment at a record $2.125m. The Food and Drug Administration on Friday
approved Zolgensma for children under the age of two with SMA, including those not yet showing symptoms.
The approval covers babies with the deadliest form of the inherited disease as well as those with types where
debilitating symptoms may set in later. “This is potentially a new standard of care for babies with the most
serious form of SMA,” said Dr Emmanuelle Tiongson, a Los Angeles paediatric neurologist who has provided
Zolgensma to patients under an expanded access programme.
SMA is the leading genetic cause of death in infants. The disease often leads to paralysis, breathing difficulty
and death within months for babies born with the most serious type 1 form. SMA affects about one in every
10,000 live births, with 50% to 70% having type 1. Novartis said it has so far treated more than 150 patients
with Zolgensma. Its chief executive, Vas Narasimhan, described Zolgensma as a near-cure for SMA if
delivered soon after birth. But data proving its durability extends to only about five years.
A review in April by the independent Institute for Clinical and Economic Review (Icer) concluded Novartis’s
previous $5m-per-patient value estimate for Zolgensma was excessive. But on Friday, Icer said that based
on Novartis’s additional clinical data, the broad FDA label and its launch price, it believed the drug fell within
the upper bound of its range for cost-effectiveness. Novartis executives have defended the price, saying a
one-time treatment is more valuable than expensive long-term treatments that cost several hundred thousand
dollars a year. The therapy uses a virus to provide a normal copy of the SMN1 gene to babies born with a
defective gene and is delivered by infusion.
Novartis is expecting European and Japanese approval later this year. Zolgensma will compete with Biogen’s
Spinraza, the first approved treatment for SMA. Spinraza, approved in late 2016, requires infusion into the
spinal canal every four months. Its list price of $750,000 for the initial year and $375,000 annually thereafter
was also deemed excessive by Icer.
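Novartis's value argument can be checked with back-of-envelope arithmetic using the list prices quoted here (ignoring discounting, rebates and any clinical differences between the two drugs):

```python
# Compare the one-time Zolgensma price with cumulative Spinraza spend,
# using the list prices quoted in the article. Ignores discounting,
# rebates and clinical differences between the two therapies.
ZOLGENSMA_ONE_TIME = 2_125_000                      # single infusion
SPINRAZA_YEAR1, SPINRAZA_LATER = 750_000, 375_000   # per-year list prices

def spinraza_cumulative(years):
    """Total Spinraza spend after the given number of years of treatment."""
    return SPINRAZA_YEAR1 + SPINRAZA_LATER * (years - 1)

# First year in which cumulative Spinraza spend exceeds Zolgensma's price
breakeven = next(y for y in range(1, 30)
                 if spinraza_cumulative(y) > ZOLGENSMA_ONE_TIME)
# breakeven = 5: $2.25m of Spinraza versus $2.125m for Zolgensma
```

On these list prices, the one-time therapy costs less than five years of Spinraza, which happens to be roughly as far as Zolgensma's durability data currently extends.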
Wall Street analysts have forecast Zolgensma sales of $2bn by 2022, according to a Refinitiv survey. Spinraza
sales hit $1.7bn last year, and are predicted to rise to $2.2bn in 2022.

[37] $80m boost to turn manufacture of cells into a big money-spinner
Chang Ai-Lien. The Straits Times. Mar 28, 19
www.straitstimes.com/singapore/80m-boost-to-turn-manufacture-of-cells-into-a-big-money-spinner

[38] First Singapore case of cannabis-derived medication allowed
Tan Tam Mei. The Sunday Times. Dec 1, 19
www.straitstimes.com/singapore/health/first-spore-case-of-cannabis-derived-medication-allowed

DO YOU SHARE THE WRITER’S CONVICTION AND ENTHUSIASM?
[39] Engineering the future of medicine
Lim Chwee Teck. The Straits Times. Feb 8, 2018
www.straitstimes.com/singapore/engineering-the-future-of-medicine

Singapore is facing a greyer future, with its accompanying health woes. Three in five people will have
contracted cancer by the time they reach 65, and diabetes is a serious concern. Health problems will put a
tremendous strain on our healthcare system and infrastructure. This is where biomedical technology can
make a significant impact. There is a slew of emerging technologies that are slowly but surely disrupting the
way healthcare and medicine are being practised.
For example, personalised or precision medicine is changing how patients are being diagnosed and
treated, especially for diseases such as cancer. The idea is to administer the right drug to the right patient, at
the right dosage and the right time. By sequencing a cancer patient's genome, we can now better treat him
by matching drugs to specific treatable mutations that this patient may be suffering from. The genetic
sequencing can be performed on circulating tumour DNA strands or cancer cells obtained from blood - which
is known as a liquid biopsy. This is less invasive and can be done more frequently than a tumour biopsy. With
this technique, we can obtain real-time feedback as to the condition of the patient through frequent sampling
and testing, something not possible with a highly-invasive tumour biopsy. Trials being conducted on patients
are showing promise, and liquid biopsy and precision medicine could potentially disrupt how cancer can be
managed and treated.
Bioprinting is another technology that may one day solve the organ shortage problem. Imagine being able to
print tissues or organs that can be specifically tailored to the needs of a particular patient. Progress in this
area is slow, however, as researchers grapple with printing cells and concocting growth factors that will not
only ensure the cells survive, but also grow in the way they should, to form the proper structure so they can
perform the functions of the tissues or organs needed. Nevertheless, there are start-ups which are already
making headway in bioprinting human tissue for the liver and kidney.
Another area of interest is big data. From the use of the smart phone, computer and fitness tracker to a visit
to a clinic, there is unprecedented continual collection of information relating to what we do and our health.
For example, by sequencing our genetic make-up at birth, we hope to better understand as much as possible
about ourselves as early as possible. This could allow us to pick up early signs of illness so we can either
prevent it or treat it early.
Linked to big data is artificial intelligence (AI). A recent survey in Britain suggests people are heading
towards more self-diagnosis and prefer to do an Internet search before seeing a doctor when they fall sick. I
believe this is the case in Singapore too, and it signals a trend in how we will soon interact with doctors. This
could mean fewer visits to clinics and hospitals. There are start-ups which provide virtual
consultations with doctors and healthcare professionals through text and video messaging, via a mobile app.
Users can also receive their drug prescription, be referred to a specialist, or book an examination at a
healthcare facility.
But AI is much more than this. It can be employed for diagnostic assistance, especially where a patient's case
is complex or rare, by quickly suggesting a diagnosis based on the patient's data - as it can access all the
patient's medical records and other related databases not just locally, but also around the world. In terms of
sensing and monitoring, AI can warn of changes in a patient's condition in real-time as it tracks data generated
from sensors that are being worn by patients. AI can also assist doctors in terms of image screening and
interpretation. This is especially so now that medical images, from X-rays to MRI scans, can be massively
and rapidly analysed and interpreted. In fact, there are cases where AI can pick up details on an
abnormal growth that our naked eye may not be able to see.
These emerging technologies are giving us a peek into what lies ahead for medicine and healthcare. What
is science fiction today may soon be reality. After all, it was not long ago that mobile shopping or the driverless
car was unthinkable. So, will the doctor be heading for extinction - a concern of today's retailers, taxi drivers
and booksellers? I think not. Doctors will evolve as they rely more and more on technology to help them make
more accurate diagnoses and better decisions on how to treat patients. The ultimate aim of medical
technology is not to take over the role of doctors, but to allow them to do a better job. There would be none of
today's modern hospitals without the biomedical technologies being used, be it a simple thermometer or a
complex X-ray machine.
A senior clinician once said to me: "A doctor can only treat one patient anywhere at any one time, but
biomedical technology developed by an engineer can enable treatment of thousands of patients anywhere
and at any one time." As we face the challenges of an ageing population, biomedical technology will form an
integral part of the healthcare solution.

DO YOU HAVE HIGH HOPES FOR REGENERATIVE MEDICINE OR DO YOU PREFER TO LET NATURE TAKE ITS COURSE?
[40] Has this scientist finally found the fountain of youth?
Erika Hayasaki. MIT Technology Review. Aug 8, 19
www.technologyreview.com/s/614074/scientist-fountain-of-youth-epigenome/

The black mouse on the screen sprawls on its belly, back hunched, blinking but otherwise motionless. Its
organs are failing. It appears to be days away from death. It has progeria, a disease of accelerated aging,
caused by a genetic mutation. It is only three months old.
I am in the laboratory of Juan Carlos Izpisúa Belmonte, a Spaniard who works at the Gene Expression
Laboratory at San Diego’s Salk Institute for Biological Studies, and who next shows me something hard to
believe. It’s the same mouse, lively and active, after being treated with an age-reversal mixture. “It completely
rejuvenates,” Izpisúa Belmonte tells me with a mischievous grin. “If you look inside, obviously, all the organs,
all the cells are younger.”
Izpisúa Belmonte, a shrewd and soft-spoken scientist, has access to an inconceivable power. These mice, it
seems, have sipped from a fountain of youth. Izpisúa Belmonte can rejuvenate aging, dying animals. He can
rewind time. But just as quickly as he blows my mind, he puts a damper on the excitement. So potent was the
rejuvenating treatment used on the mice that they either died after three or four days from cell malfunction or
developed tumours that killed them later. An overdose of youth, you could call it.
The powerful tool that the researchers applied to the mouse is called “reprogramming.” It’s a way to reset
the body’s so-called epigenetic marks: chemical switches in a cell that determine which of its genes are
turned on and which are off. Erase these marks and a cell can forget if it was ever a skin or a bone cell, and
revert to a much more primitive, embryonic state. The technique is frequently used by laboratories to
manufacture stem cells. But Izpisúa Belmonte is in a vanguard of scientists who want to apply reprogramming
to whole animals and, if they can control it precisely, to human bodies.
Izpisúa Belmonte believes epigenetic reprogramming may prove to be an “elixir of life” that will extend human
life span significantly. Life expectancy has increased more than twofold in the developed world over the
past two centuries. Thanks to childhood vaccines, seat belts, and so on, more people than ever reach natural
old age. But there is a limit to how long anyone lives, which Izpisúa Belmonte says is because our bodies
wear down through inevitable decay and deterioration. “Aging,” he writes, “is nothing other than molecular
aberrations that occur at the cellular level.” It is, he says, a war with entropy that no individual has ever won.
But each generation brings new possibilities, as the epigenome gets reset during reproduction when a new
embryo is formed. Cloning takes advantage of reprogramming, too: a calf cloned from an adult bull contains
the same DNA as the parent, just refreshed. In both cases, the offspring is born without the accumulated
“aberrations” that Izpisúa Belmonte refers to. What Izpisúa Belmonte is proposing is to go one step better still,
and reverse aging-related aberrations without having to create a new individual. Among these are changes
to our epigenetic marks—chemical groups called histones and methylation marks, which wrap around a cell’s
DNA and function as on/off switches for genes. The accumulation of these changes causes the cells to function
less efficiently as we get older, and some scientists, Izpisúa Belmonte included, think they could be part of
why we age in the first place. If so, then reversing these epigenetic changes through reprogramming may
enable us to turn back aging itself.
Izpisúa Belmonte cautions that epigenetic tweaks won’t “make you live forever,” but they might delay your
expiration date. As he sees it, there is no reason to think we cannot extend human life span by another 30 to
50 years, at least. “I think the kid that will be living to 130 is already with us,” Izpisúa Belmonte says. “He has
already been born. I’m convinced.”
The treatment Izpisúa Belmonte gave his mice is based on a Nobel-winning discovery by the Japanese stem-
cell scientist Shinya Yamanaka. Starting in 2006, Yamanaka demonstrated how adding just four proteins to
human adult cells could reprogramme them so that they look and act like those in a newly formed embryo.
These proteins, called the Yamanaka factors, function by wiping clean the epigenetic marks in a cell, giving it
a fresh start. “He went backwards in time,” Izpisúa Belmonte says. All the methylation marks, those epigenetic
switches, “are erased,” he adds. “Then you’re starting life again.” Even skin cells from centenarians, scientists
have found, can be rewound to a primitive, youthful state. The artificially reprogrammed cells are called
induced pluripotent stem cells, or IPSCs. Like the stem cells in embryos, they can then turn into any kind
of body cell—skin, bone, muscle, and so on—if given the right chemical signals.
To many scientists, Yamanaka’s discovery was promising mainly as a way to manufacture replacement
tissue for use in new types of transplant treatments. In Japan, researchers began an effort to reprogramme
cells from a Japanese woman in her 80s with a blinding disease, macular degeneration. They were able to
take a sample of her cells, return them to an embryonic state with Yamanaka’s factors, and then direct them
to become retinal cells. In 2014, the woman became the first person to receive a transplant of such lab-made
tissue. It didn’t make her vision sharper, but she did report it as being “brighter,” and it stopped deteriorating.
Before then, though, researchers at the Spanish National Cancer Research Centre had already taken the
technology in a new direction when they studied mice whose genomes harboured extra copies of the
Yamanaka factors. Turning these on, they demonstrated that cell reprogramming could actually occur inside
an adult animal body, not only in a laboratory dish. The experiment suggested an entirely new form of
medicine. You could potentially rejuvenate a person’s entire body. But it also underscored the dangers. Clear
away too many of the methylation marks and other footprints of the epigenome and “your cells basically lose
their identity,” says Pradeep Reddy, a staff researcher at Salk who worked on these experiments with Izpisúa
Belmonte. “You are erasing their memory.” These cellular blank slates can grow into a mature, functioning
cell, or into one that never develops the ability to perform its designated task. It can also become a cancer
cell. That’s why the mice I saw in Izpisúa Belmonte’s lab were prone to sprouting tumours. It proved that
cellular reprogramming had indeed occurred inside their bodies, but the results were usually fatal.
Izpisúa Belmonte believed there might be a way to give mice a less lethal dose of reprogramming. He was
inspired by salamanders, which can regrow an arm or tail. Researchers have yet to determine exactly how
amphibians do this, but one theory is that it happens through a process of epigenetic resetting similar to what
the Yamanaka factors achieve, though more limited in scope. With salamanders, their cells “just go back a
little bit” in time, Izpisúa Belmonte says.
Could the same thing be done to an entire animal? Could it be rejuvenated just enough? In 2016, the team
devised a way to partially rewind the cells in mice with progeria. They genetically modified the mice to produce
the Yamanaka factors in their bodies, just as the Spanish researchers had done; but this time, the mice would
produce those factors only when given an antibiotic, doxycycline. In Izpisúa Belmonte’s lab, some mice were
allowed to drink water containing doxycycline continuously. In another experiment, others got it just for two
days out of every seven. “When you give them … doxycycline, expression of the genes starts,” explains
Reddy. “The moment you remove it, the expression of the genes stops. You can easily turn it on or off.”
The mice that drank the most, like the one Izpisúa Belmonte showed me, quickly died. But the mice that drank
a limited dose did not develop tumours. Instead, they became more physically robust, their kidneys and
spleens worked better, and their hearts pumped harder. In all, the treated mice also lived 30% longer than
their littermates. “That was the benefit,” Izpisúa Belmonte says. “We don’t kill the mouse. We don’t generate
tumours, but we have our rejuvenation.”
When Izpisúa Belmonte published his report in the journal Cell, describing the rejuvenated mice, it seemed to
some as if Ponce de León had finally spotted the fountain of youth. “I think Izpisúa Belmonte’s paper woke a
lot of people up,” says Michael West, CEO of AgeX, which is pursuing similar aging-reversal technology. “All
of a sudden all of the leaders in aging research are like, ‘Oh, my gosh, this could work in the human body.’”
To West, the technology offers the prospect that humans, like salamanders, could regenerate tissues or
damaged organs. “Humans have that ability too, when we are first forming,” he says. “So if we can reawaken
those pathways ... wow!” To others, however, the evidence for rejuvenation is plainly in its infancy. Jan
Vijg, chair of the genetics department at the Albert Einstein College of Medicine in New York City, says aging
consists of “hundreds of different processes” to which simple solutions are unlikely. Theoretically, he believes,
science can “create processes that are so powerful they could override all of the other ones.” But he adds,
“We don’t know that right now.”
An even broader doubt is whether the epigenetic changes that Izpisúa Belmonte is reversing in his lab are
really the cause of aging or just a sign of it—the equivalent of wrinkles in aging skin. If so, Izpisúa
Belmonte’s treatment might be like smoothing out wrinkles, a purely cosmetic effect. “We have no way of
knowing, and there is really no evidence, that says the DNA methylation [is] causing these cells to age,” says
John Greally, another professor at Einstein. The notion that “if I change those DNA methylations, I will be
influencing aging,” he says, “has red flags all over it.”
One other fundamental question hangs over Izpisúa Belmonte’s findings: while he succeeded in rejuvenating
mice with progeria, he hasn’t done it in normal aged animals. Progeria is an illness due to a single DNA
mutation. Natural aging is much more complex, says Vittorio Sebastiano, an assistant professor at the
Stanford Institute for Stem Cell Biology and Regenerative Medicine. Would the rejuvenation technique work
in naturally aged animals and in human cells? He says Izpisúa Belmonte’s research so far leaves that crucial
question unanswered. Izpisúa Belmonte’s team is working to answer it. Experiments to rejuvenate normal
mice are under way. But because normal mice live as long as two and a half years, whereas those with
progeria live three months, the evidence is taking longer to gather. “And if we have to modify any experimental
condition,” Reddy says, “then the whole cycle will have to be repeated.”
Wholesale rejuvenation, then, is still far off, if it will ever come at all. But more limited versions of it,
targeted to certain diseases of aging, might be available within a few years.
If the Yamanaka factors are like a scattergun that wipes out all the epigenetic marks associated with aging,
the techniques now being developed at Salk and in other labs are more like sniper rifles. The goal is to allow
researchers to switch off a specific gene that causes a disease, or switch on another gene that can alleviate
it. Hsin-Kai Liao and Fumiyuki Hatanaka spent four years in Izpisúa Belmonte’s lab adapting CRISPR-Cas9,
the famed DNA “editing” system, to instead act as a volume control knob. Whereas the original CRISPR lets
researchers eliminate an unwanted gene, the adapted tool allows them to leave the genetic code untouched
but determine whether a gene is turned on or off. The lab has tested this tool on mice with muscular dystrophy,
which lack a gene that’s crucial in maintaining muscle. Using the epigenome editor, the researchers cranked
up the output of another gene that can play a substitute role. The mice they treated did better on grip tests,
and their muscles “had become much larger,” Liao remembers.
Another result of this kind came from beyond the Salk campus, at the University of California, Irvine.
Researcher Marcelo Wood claims that activating a single gene in old mice improves their memory in a test
involving moving objects. “We restored long-term memory function in those animals,” says Wood, who
published the results in Nature Communications. After a single epigenetic block is removed, says Wood, “the
genes for memory—they all fire. Now that animal perfectly encodes that information straight into long-term
memory.” Similarly, researchers at Duke University have developed an epigenetic editing technique (not yet
tested on animals) to turn down the volume on a gene implicated in Parkinson’s disease. Another Duke team
brought down the levels of cholesterol in mice by turning off a gene that regulates it. Izpisúa Belmonte’s lab
itself, as well as experimenting with muscular dystrophy, has worked on rolling back the symptoms of diabetes,
kidney disease, and the loss of bone cartilage, all using similar methods.
The first human tests of these techniques are likely to happen in the next few years. Two companies pursuing
the technology are AgeX and Turn Biotechnologies, a start-up cofounded by Sebastiano from Stanford. AgeX,
says West, its CEO, is looking to target heart tissues, while Turn, according to Sebastiano, will begin by
seeking regulatory clearance to test treatments for osteoarthritis and aging-related muscle loss.
Meanwhile GenuCure, a biotech company founded by Ilir Dubova, a former researcher at Salk, is raising funds
to pursue an idea for rejuvenating cartilage. The company has a “cocktail,” Dubova says, that will be injected
into the knee capsule of people with osteoarthritis, perhaps once or twice a year. Such a treatment could take
the place of expensive knee replacement surgeries. “After injection, these … genes that were silenced due to
aging would be turned on, thanks to our witchcraft, and start the rejuvenation process of the tissue,” Dubova
says. “I think turning back the clock is an appropriate way to explain it.”

WOULD YOU ACCEPT THE USE OF ANIMAL-TO-HUMAN XENOTRANSPLANTATION IF IT WERE MADE SAFE ENOUGH?

[41] Meet the pigs that could solve the human organ transplant crisis (extracts)
Karen Weintraub. MIT Technology Review. Nov 1, 19
www.technologyreview.com/s/614653/meet-the-pigs-that-could-solve-the-human-organ-transplant-crises/

Different types of tissues from genetically engineered pigs are already being tested in humans. In China,
researchers have transplanted insulin-producing pancreatic islet cells from gene-edited pigs into people with
diabetes. A team in South Korea says it’s ready to try transplanting pig corneas into people, once it gets
government approval. And at Massachusetts General Hospital, researchers announced in October that they
had used gene-edited pig skin as a temporary wound covering for a person with severe burns. The skin patch,
they say, worked as effectively as human skin, which is much harder to obtain.
But when it comes to life-or-death organs, like hearts and livers, transplant surgeons still must rely on human
parts. One day, the dream goes, genetically modified pigs like this sow will be sliced open, their hearts,
kidneys, lungs and livers sped to transplant centres to save desperately sick patients from death.
Today in the United States, 7,300 people die each year because they can’t find an organ donor—two-thirds
of them for want of a kidney. In many cases, the only hope is someone else’s tragedy: an accident that kills
someone whose organs can be harvested. Surgeons looking for another source of organs at first looked to
monkeys, because they’re the animals most similar to us. In 1984, a little girl known as Baby Fae received a
baboon heart but died 20 days later, after her immune system attacked it. Baby Fae’s short life and quick
death received global attention; many condemned the idea of killing our closest animal relatives to save
ourselves. An opinion piece by a cardiologist in the Washington Post described the procedure as “medical
adventurism.” Another, in the Journal of Medical Ethics, was headlined “Baby Fae: A beastly business.”
Then, in the 1990s, researchers and biotech companies turned to pigs as the donor of choice. Since we eat
pigs (120 million of them a year in the US alone), taking their organs seemed less morally fraught to many.
Scientifically, their organs are roughly the right size, with similar anatomy, and pigs reach adulthood in about
six months—much faster than primates. But a problem arose: pigs harbour viruses that might make the jump
to people. What’s more, with the simple genetic engineering available at the time, the transplanted organs
didn’t last long when they were tested in monkeys. They were simply, genetically speaking, too foreign.
More than two decades later, advances in genetic engineering have revived the prospect of so-called
xenotransplants. The hottest source of debate in the field: exactly how many gene edits are needed in pigs
like these to overcome the species barrier. A well-funded US company, eGenesis, which leads the more-is-better camp, says it has made a “double-digit” number of changes to the pigs it raises with a sister company
in China … The Germans at the Munich [Centre for Innovative Medical Models] facility are in the less-is-more
camp. The pigs they work with have three key genetic modifications originally made more than a decade
ago—all designed to keep baboons and humans from rejecting their organs. Knocking out the gene for
galactosyltransferase, an enzyme that coats pig cells with a sugar foreign to humans, prevented the recipient’s
immune system from immediately rejecting an organ from a different species. The second change added a gene expressing human CD46, a
protein that helps the immune system attack foreign invaders without overreacting and causing autoimmune
disease; the third introduced a gene for a protein called thrombomodulin, which prevents the blood clots that
would otherwise destroy the transplanted organ.
A smaller number of edits can be better controlled and measured, and their effects are easier to document …
If something goes wrong, as often happens in xenotransplantation, it will be clear where the issue lies. With
more edits come more potential problems. “At some point, you are in a situation that you have no idea what
an additional genetic modification does,” …
In 2018, the hearts of pigs from the Munich centre were transplanted into 14 baboons. Two of the monkeys
survived for six months, the longest any animal has lived with a heart from another species. In a report in
Nature last December, the German researchers described their achievement as “a milestone on the way to
clinical cardiac xenotransplantation.”
It isn’t cheap to create a gene-edited pig and then raise it to the standard required by the US Food and Drug
Administration and other agencies that would regulate pig-to-human transplants around the world …
Other groups are also getting close … A pig heart could serve, as was hoped for Baby Fae, as a bridge until
patients can receive a human heart.

[42] 3D printing organs moves a few more steps closer to commercialisation
Jonathan Shieber. Tech Crunch. Aug 12, 19
techcrunch.com/2019/08/11/3d-printing-organs-moves-a-few-more-steps-closer-to-commercialization/

HOW COMFORTABLE ARE YOU WITH THE USE OF BIOTECHNOLOGY IN CROP CULTIVATION?
[43] The first gene-edited food is now being served (extracts)
Megan Molteni. WIRED. Mar 20, 19
www.wired.com/story/the-first-gene-edited-food-is-now-being-served/

Farmers and breeders have been manipulating the DNA of the plants humans eat for millennia. But with
powerful new gene-editing technologies developed over the last five years, scientists can now add or subtract
plant genes with unprecedented precision and speed—leaving first-generation GMOs, along with their stigma
and burdensome regulations, in the dust. Companies big and small have adopted the technology to make
products as disparate as climate change-resistant cacao and extra-starchy corn for adhesives. But last month
Calyxt became the first to commercially debut a gene-edited food, a soybean oil it claims to have made
healthier.
Shoppers can’t yet buy the oil, a product of soybean plants that have been edited to produce fewer saturated
fats and zero trans fats, but Calyxt’s CEO Jim Blome says people are already eating it. The company’s first
client—a restaurant with multiple locations in the Midwest—has begun using the oil to fry, make sauces, and
dress salads, as the Associated Press reported last week. Calyxt describes its oil as having the heart-healthy
fat profile of olive oil without its strong, sometimes grassy flavour. Whether that’s something customers want
remains to be seen. But Calyno, as the oil is known, marks an important moment in the long human history of
messing with plant DNA. It signals the official arrival of foods that have been genetically altered not solely to
make farmers' lives easier, but to make consumers’ tummies (and hearts and other organs) happier.
“Right now the food industry solves all its problems through processing or chemistry,” says Dan Voytas, Calyxt’s chief science officer. “We’d like
to do it through genetics and gene-editing.” In addition to its soybean oil, Calyxt is working on wheats with
more fibre and less gluten and potatoes that can safely be put in cold storage without accumulating sugars
that catalyse into cancer-causing chemicals when cooked at high temperatures. (That’s a thing that actually
happens.) The company is developing traits useful to farmers too. When I visited Calyxt last
August, rows of alfalfa plants had just been moved from the greenhouses to test plots outside to make way
for herbicide-resistant soy and canola. But those are in much earlier stages of development. What Calyxt is
really trying to do, according to Voytas, is make it easier for people to have a healthy diet without giving up
the foods they like. “We’d like a piece of Wonder Bread to meet all your daily requirements of fibre,” he adds.
Engineering these novel nutritional attributes starts on the top floor of the Calyxt lab, where its scientists design
gene-editing molecules on computer screens and then have pipetting robots build them. The most well-known
gene editor is Crispr, but Calyxt uses a different set of DNA-cutting enzymes called TALENs. In 2010, Voytas
co-invented the method in his plant genetics lab at the University of Minnesota, where he still spends some of
his time. For a few years, he and his grad students were busy making TALENs for other researchers who
wanted to supercharge their plant gene-tinkering toolbox. “Then Crispr came along and you didn’t really need
the Voytas lab anymore,” he says.
By then, though, he had taken his tech to the French biotechnology firm Cellectis and been installed as chief
scientist of its new plant engineering division. Calyxt, as that company is known today, has about 50
employees. Many of them are scientists who work down in the sterile plant-tissue culture labs. There they sort
seeds, transfer embryonic plant cells to agar-filled petri dishes, and deliver the custom-designed TALENs.
Then they douse the cells in root- and leaf-stimulating hormones and let them grow until they become big
enough to punch out a bit of leaf material to sequence and see if the right edit was made. Successfully edited
plants get moved to a brightly lit, temperature-regulated nursery room for further testing before going out to
the greenhouse. Next they might get crossed with other lines that grow better in less controlled environments
or sent straight outside to see how they behave in small plot trials. From the top performing plant, Calyxt will
start saving seeds to eventually sell to farmers.
In 2018, Calyxt contracted with 78 farmers in Iowa, Minnesota, and South Dakota to grow 17,000 acres of its
gene-edited, high oleic soybeans. At the end of the season Calyxt bought back the beans, and had them
crushed into Calyno oil, which it is currently shopping around to more than 40 food companies. The company
tries to find farmers within 100 miles of small, independent crushing facilities that don’t mind halting operations
for a deep-cleaning of their machines to make way for Calyxt’s haul. That’s because—unlike 95 percent of the
80 million or so acres of soybeans planted each year—Calyxt says its crop is “non-GMO.”
So far, US regulators have agreed, saying that as long as a genetic alteration could have been bred in a plant,
meaning you’re not injecting DNA from other reproductively incompatible organisms, it doesn’t require
special oversight. Bayer and DuPont each have their own versions of high oleic soybeans that were made
with conventional genetic engineering techniques, and therefore had to undergo additional safety testing and
environmental assessments. Calyxt’s version won’t be subject to any of that, nor the USDA’s long-
awaited GMO labelling requirements, which it released in December. The standards will require food
companies to label foods that have been “bioengineered” by 2022, but the rule likely won’t apply to gene-
edited foods if they don’t contain foreign DNA.
While critics lambasted these decisions and have called for more regulation of gene-edited foods, companies
are ploughing ahead. Farmers in Montana and North Dakota are growing an herbicide-resistant canola that
was genetically tweaked by Cibus, a plant-editing company based in San Diego. In Massachusetts, Yield10
Bioscience is boosting flax’s omega-3 content; another company called Pairwise has an eye on designer fruits
and veggies; and a third, named Inari, is planning to tailor seeds right down to growing conditions of individual
farms …

WHAT IF SECURITY AND SUSTAINABILITY ARE AT STAKE? ARE WE TOO RELIANT ON SCIENCE/TECHNOLOGY?

[44] Asia needs to invest $1.1 trillion more to feed its people: study (extracts)
Teo Zhuo. The Straits Times. Nov 21, 19
www.straitstimes.com/singapore/asia-needs-to-invest-11-trillion-more-to-feed-its-people-study

A massive US$800 billion (S$1.09 trillion) on top of existing investment levels will be needed for Asia to feed
itself sustainably over the next 10 years … This comes as spending on food in the region is expected to
double to US$8 trillion by 2030, with 250 million more mouths - equivalent to the size of Indonesia's population
- to feed in the same time … more than 800 million people live in hunger today, and the planet's growing
population is set to increase by two billion people in 30 years. On the other hand, crop yields may drop by
25 per cent over the same period and the world has lost a third of its arable land in the last 40 years because
of unsustainable farming. The report also highlighted how Asia has become increasingly dependent on
other regions for food, with net imports of food tripling to around 220 million tonnes annually since the turn
of the century.

As Asia rapidly urbanises and becomes more affluent, governments, investors and firms will have to work
together to create an environment that supports innovation in the agri-food sector, said [a] report by PwC,
Temasek and Dutch multinational Rabobank. Much of this innovation will need the use of new technology,
such as vertical farming, developing alternative sources of protein, and even using artificial intelligence or
drones to improve farming methods …
However, challenges will need to be overcome. These include Asia's diversity - in terms of differences in
regulation, wealth and culture - as well as the region's currently fragmented production and supply chains
… There are several ways technology can help address many [of the] challenges … Big data, sensors and
biotech can help with crop yield and nutritional quality, cold chain technology can help keep food fresh, and
tracking technology can provide assurance as to the origins of the food.
WHAT IS ON THE MENU
• Developed by the International Rice Research Institute, “scuba rice” can withstand flooding for up to two weeks. Most rice varieties die within days of being submerged under water. This variety can potentially benefit 49 million acres of rice fields across Asia susceptible to flooding.
• To reduce the carbon footprint and increase efficiency, lab-grown meat or plant-based products like the burgers from Impossible Foods and Beyond Meat will likely outstrip the overall protein market in terms of growth. Insect-based protein can also contribute as a key source of feed for aquaculture.
• Indoor farms save space, offer crops protection from the elements, are more sustainable, and produce better yields. Soil-free methods can use 70-95 per cent less water and vertical farms can increase yield 400-fold. There are already 500 plant factories across Asia, with over 200 in Japan.
• One of the world’s first floating closed-containment fish farms was commissioned earlier this week some 5km off Changi Point Ferry Terminal. The “Eco-Ark” built by Singapore-based Aquaculture Centre of Excellence can produce 166 tonnes of fish a year – about 20 times more than the minimum production yield set for coastal fish farms in Singapore.

[45] Newsand from processed waste may be used in construction (extracts)


Timothy Goh. The Straits Times. Nov 26, 19
www.straitstimes.com/singapore/newsand-from-processed-waste-may-be-used-in-construction

The Ministry of the Environment and Water Resources will soon begin a field trial to assess the real-life performance of possible Newsand materials generated from incineration bottom ash, the thicker and heavier component of incinerated ash, and that created from slag, the by-product of the gasification of solid waste …
As two-thirds of Singapore is designated as water catchment areas, the agency said the environmental standards for Newsand have to be sufficiently stringent to ensure that the material can be used in any location in Singapore without compromising the country's water resources and environment …
By 2030, [Singapore] wants to send about one-third less waste to the [Semakau Landfill] in a bid to help it last longer than the projected 2035. [At present] about 2,100 tonnes of waste was being sent to the landfill daily … the current use of Newsand and the upcoming field trials are a culmination of efforts over the years to turn trash into resources and close Singapore's waste loop … extend the lifespan of [its landfill] and keep it running for as long as possible … towards becoming a zero-waste nation.

[46] The US government has approved funds for geoengineering research
James Temple. MIT Technology Review. Dec 20, 19
www.technologyreview.com/s/614991/the-us-government-will-begin-to-fund-geoengineering-research/

The US government has for the first time authorized funding to research geoengineering, the
controversial idea that we could counteract climate change by reflecting heat away from the planet. The
$1.4 trillion spending bills that Congress passed this week included a little-noticed provision setting aside at
least $4 million for the National Oceanic and Atmospheric Administration to conduct stratospheric monitoring
and research efforts. The program includes assessments of “solar climate interventions,” including “proposals
to inject material [into the stratosphere] to affect climate.”
President Donald Trump is expected to sign the sweeping appropriations bills today. In a related move,
Congressman Jerry McNerney of California introduced a bill yesterday that would enable NOAA to set up a
formal programme to carry out this climate intervention research. The full text of the bill isn't yet available,
and McNerney’s office didn’t immediately respond to inquiries from MIT Technology Review. But the primary
aims would include improving our basic understanding of stratospheric chemistry, and assessing the potential
effects and risks of geoengineering.
The legislation would also grant NOAA oversight authority to review and report on experiments proposed
by other research groups, says Kelly Wanser, an advisor on geoengineering research efforts and executive
director at SilverLining, who consulted with McNerney’s office on details in the bill. A growing number of
academic research groups are exploring various ways to cool the planet as the threat of climate change grows,
including injecting reflective particles into the stratosphere or spraying salt water into the sky to brighten
coastal clouds.
But there are concerns that using such tools could have dangerous environmental side effects, and that even
suggesting them as solutions could ease pressure to cut the greenhouse-gas emissions driving climate
change.
In a statement, McNerney asserted that the federal government should take the lead in this controversial field,
noting that other research efforts are already moving forward. A team of Harvard researchers has been
preparing to conduct one of the first outdoor experiments related to geoengineering, by launching a balloon
that would spray a small quantity of particles into the stratosphere. At least in part because there isn’t a US-
government-funded research programme in place, Harvard took the unusual step of creating its own external
advisory committee to ensure that the researchers work to limit environmental risks, seek outside input, and
operate in a transparent way.
McNerney previously introduced legislation directing the National Academy of Sciences to propose a
geoengineering research agenda and oversight guidelines. It, in turn, established a committee that’s set to
release its recommendations next year.
Since emissions cuts alone likely can’t prevent dangerous levels of climate change, public funding for
geoengineering research is “overdue,” Jesse Reynolds, an environmental law and policy fellow at the
University of California, Los Angeles, said in an email. “We need to know more about solar geoengineering’s
capabilities, limitations, and risks so that future decisions will be informed ones,” he added.

IS THE USE OF SURVEILLANCE AND FACIAL RECOGNITION TECHNOLOGY MORE OF A BOON OR BANE?
[47] 'We are hurtling towards a surveillance state’: the rise of facial recognition
technology (extracts)
Hannah Devlin. The Guardian. Oct 5, 19
www.theguardian.com/technology/2019/oct/05/facial-recognition-technology-hurtling-towards-surveillance-state

Gordon’s wine bar is reached through a discreet side-door … The bar’s Dickensian gloom is a selling point for
people embarking on affairs, and actors or politicians wanting a quiet drink – but also for pickpockets. When
Simon Gordon took over the family business in the early 2000s, he would spend hours scrutinising the faces
of the people who haunted his CCTV footage … When two of Gordon’s friends visited the bar for lunch and
both had their wallets pinched in his presence, he decided to take matters into his own hands. “The police did
nothing about it,” he says. “It really annoyed me.” …
[Gordon’s] frustration spurred him to launch Facewatch, a fast-track crime-reporting platform that allows clients
(shops, hotels, casinos) to upload an incident report and CCTV clips to the police. Two years ago, when facial
recognition technology was becoming widely available, the business pivoted from simply reporting into
active crime deterrence. Nick Fisher, a former retail executive, was appointed Facewatch CEO; Gordon is its
chairman. Gordon installed a £3,000 camera system at the entrance to the bar and, using off-the-shelf
software to carry out facial recognition analysis, began collating a private watchlist of people he had observed
stealing, being aggressive or causing damage. Almost overnight, the pickpockets vanished, possibly put off
by a warning at the entrance that the cameras are in use.
The company has since rolled out the service to at least 15 “household name retailers”, which can upload
photographs of people suspected of shoplifting, or other crimes, to a centralised rogues’ gallery in the cloud.
Facewatch provides subscribers with a high-resolution camera that can be mounted at the entrance to their
premises, capturing the faces of everyone who walks in. These images are sent to a computer, which extracts
biometric information and compares it to faces in the database. If there’s a close match, the shop or bar
manager receives a ping on their mobile phone, allowing them to monitor the target or ask them to leave;
otherwise, the biometric data is discarded. It’s a process that takes seconds … [Fisher] tells me he has signed
a deal with a major UK supermarket chain (he won’t reveal which) and is set to roll out the system across their
stores this autumn. On a conservative estimate, Fisher says, Facewatch will have 5,000 cameras across the
UK by 2022.
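The capture-match-discard loop described above is, at bottom, a similarity search over face embeddings against a threshold. A minimal sketch in Python of that logic (the function names, three-dimensional vectors, watchlist entry and 0.8 threshold are illustrative assumptions, not Facewatch's actual system):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_watchlist(embedding, watchlist, threshold=0.8):
    """Return the watchlist ID whose stored embedding best matches the
    captured one, provided the similarity clears the threshold.
    Returns None otherwise, at which point the caller would discard
    the biometric data, mirroring the discard step in the article."""
    best_id, best_score = None, threshold
    for person_id, stored in watchlist.items():
        score = cosine_similarity(embedding, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy three-dimensional "embeddings"; real systems use 128 or more dimensions.
watchlist = {"suspect_42": [0.9, 0.1, 0.4]}
print(check_watchlist([0.88, 0.12, 0.41], watchlist))  # near match -> suspect_42
print(check_watchlist([0.10, 0.90, 0.20], watchlist))  # stranger -> None
```

The threshold is where the civil-liberties trade-off lives: lower it and the system flags more genuine suspects but also more innocent passers-by, which is precisely the proportionality question the rest of this article debates.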
The company also has a contract with the Brazilian police, who have used the platform in Rio de Janeiro. “We
caught the number two on Interpol’s most-wanted South America list, a drug baron,” says Fisher, who adds
the system also led to the capture of a male murderer who had been on the run for several years, spotted
dressed as a woman at the Rio carnival. I ask him whether people are right to be concerned about the potential
of facial recognition to erode personal privacy. “My view is that, if you’ve got something to be worried about,
you should probably be worried,” he says. “If it’s used proportionately and responsibly, it’s probably one of the
safest technologies today.”
Unsurprisingly, not everyone sees things this way. In the past year, as the use of facial recognition technology
by police and private companies has increased, the debate has intensified over the threat it could pose to
personal privacy and marginalised groups … This summer, the London mayor, Sadiq Khan, wrote to the
owners of a private development in King’s Cross, demanding more information after it emerged that facial
recognition had been deployed there for unknown purposes. In May, Ed Bridges, a public affairs manager at
Cardiff University, launched a landmark legal case against South Wales police. He had noticed facial
recognition cameras in use while Christmas shopping in Cardiff city centre in 2018. Bridges was troubled by
the intrusion. “It was only when I got close enough to the van to read the words ‘facial recognition technology’
that I realised what it was, by which time I would’ve already had my data captured and processed,” he says.
When he noticed the cameras again a few months later, at a peaceful protest in Cardiff against the arms trade,
he was even more concerned: it felt like an infringement of privacy, designed to deter people from
protesting. South Wales police have been using the technology since 2017, often at major sporting and music
events, to spot people suspected of crimes, and other “persons of interest” … Cardiff’s high court ruled that
the trial, backed by £2m from the Home Office, had been lawful. Bridges is appealing, but South Wales police
are pushing forward with a new trial of a facial recognition app on officers’ mobile phones. The force says it
will enable officers to confirm the identity of a suspect “almost instantaneously, even if that suspect provides
false or misleading details, thus securing their quick arrest”.
The Metropolitan police have also been the subject of a judicial review by the privacy group Big Brother Watch
and the Green peer Jenny Jones, who discovered that her own picture was held on a police database of
“domestic extremists”. In contrast with DNA and fingerprint data, which normally have to be destroyed within
a certain time period if individuals are arrested or charged but not convicted, there are no specific rules in
the UK on the retention of facial images. The Police National Database has snowballed to contain about
20m faces, of which a large proportion have never been charged or convicted of an offence. Unlike DNA and
fingerprints, this data can also be acquired without a person’s knowledge or consent. “I think there are
really big legal questions,” says Silkie Carlo, director of Big Brother Watch.
“The notion of doing biometric identity checks on millions of people to identify a handful of suspects is
completely unprecedented. There is no legal basis to do that. It takes us hurtling down the road towards a
much more expansive surveillance state.”
Some countries have embraced the potential of facial recognition. In China, which has about 200m
surveillance cameras, it has become a major element of the Xue Liang (Sharp Eyes) programme, which ranks
the trustworthiness of citizens and penalises or credits them accordingly. Cameras and checkpoints have
been rolled out most intensively in the north-western Xinjiang province, where the Uighur people, a Muslim
and minority ethnic group, account for nearly half the population. Face scanners at the entrances of shopping
malls, mosques and at traffic crossings allow the government to cross-reference with photos on ID cards to
track and control the movement of citizens and their access to phone and bank services.
At the other end of the spectrum, San Francisco became the first major US city to ban police and other
agencies from using the technology in May this year, with supervisor Aaron Peskin saying: “We can have
good policing without being a police state.” Meanwhile, the UK government has faced harsh criticism from its
own biometrics commissioner, Prof Paul Wiles, who said the technology is being rolled out in a “chaotic”
fashion in the absence of any clear laws. Brexit has dominated the political agenda for the past three years;
while politicians have looked the other way, more and more cameras are being allowed to look at us.
Facial recognition is not a new crime-fighting tool … However, in the past three years, the performance of
facial recognition has stepped up dramatically … The rapid acceleration is thanks, in part, to the goldmine of
face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the
past decade … By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000
people harvested from search engines – they included celebrities, broadcasters, business people and anyone
with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to
be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the
development of software used by the Chinese state to control its Uighur population … In parallel, hardware
companies have developed a new generation of powerful processing chips, called Graphics Processing Units
(GPUs) … The combination of big data and GPUs paved the way for an entirely new approach to facial
recognition, called deep learning, which is powering a wider AI revolution. “The performance is just
incredible,” says Maja Pantic, research director at Samsung AI Centre, Cambridge, and a pioneer in computer
vision. “Deep [learning] solved some of the long-standing problems in object recognition, including face
recognition.” …
The performance of facial recognition software varies significantly, but the most effective algorithms available
… very rarely fail to match faces using a high-quality photograph. There is far less information, though, about
the performance of these algorithms using images from CCTV cameras, which don’t always give a clear view.
Recent trials reveal some of the technology’s real-world shortcomings … In general, Pantic says, the public
overestimates the capabilities of facial recognition … Her own team has developed, as far as she is aware,
the world’s leading algorithm for learning new faces, and it can only store the information from about 50 faces
before it slows down and stops working …
Concerns have been raised that facial recognition has a diversity problem, after widely cited research by
MIT and Stanford University found that software supplied by three companies misassigned gender in 21% to
35% of cases for darker-skinned women, compared with just 1% for light-skinned men. However, based on
the top 20 algorithms, NIST found that there is an average difference of just 0.3% in accuracy between
performance for men, women, light- and dark-skinned faces. Even so, says Carlo of Big Brother Watch, the
technology’s impact could still be discriminatory because of where it is deployed and whose biometric data
ends up on databases …
Debates about civil liberties are often dictated by instinct: ultimately, how much do you trust law enforcement
and private companies to do the right thing? When searching for common ground, I notice that both sides
frequently reference China as an undesirable endpoint. Fisher thinks that the recent disquiet about facial
recognition stems from the paranoia people feel after reading about its deployment there. “They’ve created
digital prisons using facial recognition technology. You can’t use your credit card, you can’t get a taxi, you
can’t get a bus, your mobile phone stops working,” he says. “But that’s China. We’re not China.”
Groups such as Liberty and Big Brother Watch say the opposite: since facial recognition, by definition, requires
every face in a crowd to be scanned to identify a single suspect, it will turn any country that adopts it into a
police state. “China has made a strategic choice that these technologies will absolutely intrude on people’s
liberty,” says biometrics commissioner Paul Wiles. “The decisions we make will decide the future of our social
and political world.”
For now, it seems that the question of whether facial recognition will make us safer, or represents a new kind
of unsafe, is being left largely to chance. “You can’t leave [this question] to people who want to use the
technology … it should be parliament.” … anonymity is no longer guaranteed. Facial recognition gives police
and companies the means of identifying and tracking people of interest, while others are free to go about their
business. The real question is: who gets that privilege?

[48] Emotion recognition is China's new surveillance craze.
The Straits Times. Nov 7, 19
www.straitstimes.com/asia/east-asia/emotion-recognition-is-chinas-new-surveillance-craze

[49] France set to roll out nationwide facial recognition ID programme.
The Straits Times. Oct 3, 19
www.straitstimes.com/world/europe/france-set-to-roll-out-nationwide-facial-recognition-id-programme

[50] AI can read your emotions. Should it?
Tim Lewis. The Guardian. Aug 17, 19
www.theguardian.com/technology/2019/aug/17/emotion-ai-artificial-intelligence-mood-realeyes-amazon-facebook-emotient

WOULD THE GAINS FROM USING AI OUTSTRIP THE COSTS? HOW DO WE AMPLIFY THE FORMER AND MINIMISE THE LATTER?

[51] The age of artificial intelligence: cities and the A.I. edge
Irene Tham. The Straits Times. Aug 11, 19
www.straitstimes.com/tech/cities-and-the-ai-edge

In Padang, West Sumatra, San Francisco-based non-profit organisation Rainforest Connection is mounting
used cellphones on trees to detect sounds that originate from chainsaws or trucks belonging to illegal loggers.
Rangers, villagers and law enforcement agencies are then alerted to the illegal activities and can take action.
In Singapore, DBS Bank is predicting when employees will quit, so management can intervene and retain
staff. In Taipei, Taiwan's performing arts centre National Theatre and Concert Hall is using technology to
provide automatic sub-titling so that people with hearing disabilities can also enjoy performances.
What unites the three cities in their cutting-edge exploits is a new frontier technology known as artificial
intelligence (AI). Tipped to power the fourth industrial revolution, AI is a technique that allows machines to
learn from enormous sets of data. In itself, the technique is not new. The first working programme was written
in 1951, allowing humans to play chess and checkers with the world's first commercially available computer -
the Ferranti Mark 1 created by British firm Ferranti. AI is now able to do a lot more, and at a quicker pace,
allowing cities and the way people live and work in them to be shaped. The Singapore Government, for one,
is testing the use of AI to ease traffic congestion, deter fraud in claims for government funding and detect
potential drownings in public pools.
So what has changed since the 1950s? "It is the increasing power of computation that allows an enormous
amount of data to be processed in milliseconds," said retired Israeli major-general Isaac Ben-Israel, who is in
charge of developing his country's national AI strategy. For instance, 200,000 hours of the sounds of the ocean
can be analysed in hours to detect humpback whales and chart their migration patterns. It would take years
to do the analysis without AI software powered by fast computers. Professor Ben-Israel also sits on the board
of Singapore's Agency for Science, Technology and Research (A*Star), and helps to chart Singapore's overall
research and development direction.
Over the past five years, big American tech firms Google, Amazon and Apple as well as China tech giants
Huawei and Alibaba have contributed to quantum leaps in computing power by developing their own AI chips
to allow for faster and cheaper data processing for machine learning, much of which takes place on
systems hosted on the Internet. Machine learning is a key part of AI, allowing specific tasks to be done by
way of inference in the absence of explicit instructions and computing rules. As a result, AI researchers can
rent online computing power for 10 hours for US$50 (S$69) or less per computer to process data, cutting the
need to own their own super machines. Access to cheap computing resources and free open-source tools
has allowed engineering students in Delhi, India, to develop an app to let people assess air quality in real-
time, simply by snapping a picture with their mobile phones.
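The "inference in the absence of explicit instructions" idea can be made concrete with a toy sketch: a nearest-neighbour classifier is never given a rule for what counts as good or poor air, only labelled examples to generalise from. All the numbers and feature names below are invented for illustration and are not taken from the Delhi app:

```python
# Toy sketch only: a 1-nearest-neighbour classifier "learns" a rule purely
# from labelled examples, with no explicit if/else instructions about what
# makes air quality good or poor. All data points here are invented.
import math

# Hypothetical training examples: (visibility_km, haze_score) -> label
EXAMPLES = [
    ((9.0, 0.10), "good"),
    ((8.0, 0.20), "good"),
    ((2.0, 0.90), "poor"),
    ((1.5, 0.80), "poor"),
]

def classify(point):
    """Infer a label by copying the closest known example's label."""
    nearest = min(EXAMPLES, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(classify((8.5, 0.15)))  # -> good
print(classify((1.0, 0.95)))  # -> poor
```

A new reading is classified by distance to past examples, so adding more labelled data improves the "rule" without anyone ever writing it down.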
AI is also fast becoming the defining competitive advantage for cities, said experts. "It is an important
competitive advantage as AI helps to improve air quality and public transport, which will make a city more
liveable... These are in addition to traditional infrastructure like mobile networks and electricity," said Mr Sherif
Elsayed-Ali, director of partnerships at Canada-based software firm Element AI. Mr Jeff Dean, head of AI at
Google, added: "There will be a lot more opportunities to do things differently and do more things than you
can do today." For instance, Bali-based Gringgo Indonesia Foundation is working with Google to develop an
AI-enabled application to help waste collectors better identify recyclables by snapping a picture on
their phones.
But as with all new technologies, there is some level of mistrust. For example, in "passive data collection",
people out in public are having their bodies and faces scanned without their consent by security cameras for
law enforcement purposes. "If misused, then it will erode a lot of trust in such technologies," Mr Elsayed-Ali
said. In China, home to the world's largest network of surveillance cameras and gait recognition technology
for law enforcement, not many have raised concerns. But San Francisco earlier this year banned the use
of facial scanning for administrative efficiencies or public safety, to prevent potential abuse, the city said.
Each city's level of tolerance and trust varies, an ongoing survey conducted by Switzerland-based business
school the International Institute for Management Development (IMD) and the Singapore University of
Technology and Design has found. "In Chongqing and Bengaluru, people are 100 per cent comfortable, but
people in Boston and Amsterdam do not want their faces scanned. People in Singapore and Dubai have mixed
feelings," said Dr Bruno Lanvin, president of the Smart City Observatory at IMD.
The push for more privacy has led to a movement for federated machine learning, where AI machines are
trained using millions of mobile devices, without extracting raw data from the devices. Google's TensorFlow
and Facebook's PyTorch are two of the world's most popular machine learning frameworks built to deploy
federated machine learning. They power predictive spelling on virtual keyboards that can "learn" new words,
and virtual assistants that can converse with humans.
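The core of federated machine learning, federated averaging, can be sketched in a few lines of Python. This is a hedged toy model under simplifying assumptions (a one-parameter linear model and made-up client data), not the actual TensorFlow or PyTorch federated APIs:

```python
# Hedged sketch of federated averaging: each device fits the model on its
# own private data, and only the model parameter -- never the raw data --
# travels back to the server to be averaged. Model, data and rates are
# invented for illustration; this is not TensorFlow's or PyTorch's API.
import random

def local_update(w, data, lr=0.05, steps=50):
    """One client: fit y = w*x on its private data by stochastic gradient descent."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad
    return w

random.seed(0)
# Three devices hold private datasets, all generated by the rule y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)],
           [(0.5, 1.5), (1.5, 4.5)],
           [(2.0, 6.0), (3.0, 9.0)]]

global_w = 0.0
for _ in range(10):                            # federated rounds
    local_ws = [local_update(global_w, d) for d in clients]
    global_w = sum(local_ws) / len(local_ws)   # server averages weights only

print(round(global_w, 2))  # converges towards 3.0
```

The server ends up with a model close to the true rule even though it never saw a single raw data point, which is the privacy argument behind the approach.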
Singapore's Senior Minister of State for Communications and Information and Transport Janil Puthucheary
said that new technologies - whether they are chemical or biomedical engineering or aviation - bring with them
risks as well as opportunities, and have subjected humanity to ethical binds before. Dr Janil, who is also
minister-in-charge of GovTech, the agency behind the Singapore public sector's technology transformation,
said people need to accept that there are profit motives as well as social responsibility. He added: "AI is
already here and we are already on this path. We need to have a little bit of faith in humanity that we can
deliver on this."

[52] 'Guardians' in Padang send alerts when forests face threats.
www.straitstimes.com/asia/guardians-send-alerts-when-forests-face-threats

[53] Agri-tech solutions reduce guesswork for farmers in Lucknow.
www.straitstimes.com/tech/agri-tech-solutions-reduce-guesswork-for-farmers

[54] Voice-to-text app helps make the arts inclusive in Taipei.
www.straitstimes.com/tech/voice-to-text-app-helps-make-the-arts-inclusive

[55] Making driving safer for disabled, elderly in Seoul.
www.straitstimes.com/tech/making-driving-safer-for-disabled-elderly

[56] Helping seniors live longer, healthier in Tokyo.
www.straitstimes.com/asia/east-asia/helping-seniors-live-longer-healthier

[57] Staff planning to quit? System in Singapore can tell.
www.straitstimes.com/tech/staff-planning-to-quit-system-can-tell

HOW WILL RAPID AND EXTENSIVE ADOPTION OF AI AND ROBOTICS SHAPE WORK PROSPECTS AND JOB SECURITY?

[58] Robots to wipe out 20 million jobs around the world by 2030: Study
Chong Koh Ping. The Straits Times. Jun 26, 19
www.straitstimes.com/tech/robots-to-wipe-out-20-million-jobs-around-the-world-by-2030-study

Up to 20 million manufacturing jobs will be lost globally to robots by 2030, a new study has found. And the
displacement of jobs will not be evenly spread around the world, or within countries, according to the study
published on Wednesday (June 26) by Oxford Economics, a UK-based research firm. Lower-skilled regions,
which tend to have weaker economies and already-high unemployment rates, are much more vulnerable to
the job losses, it said after surveying seven economies: the United States, Germany, Britain, France,
Japan, South Korea and Australia. Since 2000, some 1.7 million manufacturing jobs have already been lost
to robots, including around 400,000 in Europe, 260,000 in the US, and 550,000 in China.
The study noted that the rate at which robots were replacing jobs had been rising steadily, with the global
stock of industrial robots more than doubling since 2010. "The robotics revolution is rapidly accelerating, as
fast-paced technological advances converge. The result will transform what robots can do over coming
decades - and their ability to take over tasks that humans do now," said Mr James Lambert, one of the lead
authors of the study and Director of Economic Consulting for Asia at Oxford Economics. He added: "The
number of robots is also set to multiply rapidly. We expect the number in use to reach 20 million by 2030 -
about 10 times the number now."
The authors observed that the centre of gravity in the world's robot stock has shifted towards new
manufacturers, mainly in China, South Korea and Taiwan but also to India, Brazil and Poland. About one in
three new robots worldwide is now installed in China, which accounts for around one-fifth of the world's total stock
of robots, up from just 0.1 per cent in 2000. By 2030, China could have as many as 14 million industrial robots
in use, dwarfing the rest of the world's stock of them. In contrast, the combined robot inventory of the US and
Europe has fallen to under 40 per cent of the global share from its peak of close to 50 per cent in 2009. And
Japan - formerly the world leader in automation - has reduced its active stock of robots by around 100,000
units since 2000.
The study predicted that the use of robots in services industries would accelerate sharply in the next five
years, fuelled by advances in artificial intelligence (AI), machine learning, and engineering. This would
particularly affect the logistics sector but should spread to other industries including healthcare, retail,
hospitality and transport, it said. "The implications are huge. We will see a significant boost to productivity and
economic growth and some new types of job we can't even yet foresee," said Mr Lambert.
The report predicted that a 30 per cent rise in robot installations above its baseline forecast for 2030 would
add US$4.9 trillion to the global economy that year, equivalent to an economy greater than the projected size
of Germany's in that year. "But at the same time business models will be disrupted or upturned and millions
of existing workers will be displaced - and the impact will affect lower-skilled and poorer economies and
regions most," he cautioned. "Governments, policymakers, business and individuals need to think hard now
about this wave of tech-driven change and we all need to prepare for what amounts to a new industrial
revolution," he added.
When asked about the impact of robots in Singapore, Mr Lambert said it was well positioned to benefit from
this new generation of robotics as it has a modern and upgradeable infrastructure, a supportive regulatory
framework and a strong investment environment. "Those workers in Singapore that are displaced by
technology will have to adapt their skills to the evolving demands of the future economy but the government
already has put in place schemes to help to retrain workers displaced by technology," he said. "Singapore
also has an ageing population (more so than most) and restraints on inward migration, so robots may be
particularly helpful in keeping the economy growing," Mr Lambert noted.

[59] These American workers are the most afraid of A.I. taking their jobs
Jacob Douglas. CNBC. Nov 7, 19
www.cnbc.com/2019/11/07/these-american-workers-are-the-most-afraid-of-ai-taking-their-jobs.html

The Terminator movie franchise is back, and the idea that robots and artificial intelligence are coming for us
— specifically, our jobs — is a big part of the present moment. But the majority of the working population remains
unafraid of a T-800 stealing their employment. Only a little over one-quarter (27%) of all workers say they are
worried that the job they have now will be eliminated within the next five years as a result of new technology,
robots or artificial intelligence, according to the quarterly CNBC/SurveyMonkey Workplace Happiness survey.
Nevertheless, the survey results show it may be only a matter of time:
Fears about automation and jobs run higher among the youngest workers. The survey found that 37%
of workers between the ages of 18 and 24 are worried about new technology eliminating their jobs. That’s
nearly 10 percentage points higher than any other demographic. Dan Schawbel, research director of Future Workplace and
author of “Back to Human,” said one reason for the age-based fear gap is because technology, like AI, is
becoming normalized. “They are starting to see the value of
[AI] and how it’s impacting their personal and professional
lives,” Schawbel said. “We’re using AI without even thinking
about it. It’s a part of our lives. If you are talking to Siri or Alexa,
that’s AI.”
Laura Wronski, senior research scientist at SurveyMonkey,
said, “As digital natives, [18- to 24-year-old workers]
understand the potential of technology to have a positive
impact. But with 30 or 40 years left in the workforce, they likely
envision vast potential changes in the nature of work over the
course of their lifetime.”
The survey also revealed a link between income and fear, with
34% of workers making $50,000 or under afraid of losing their
jobs due to technology; that goes down to 16% among
workers making between $100,000 and $150,000, and 13%
for workers making $150,000 or more.
In some industries where technology already has played
a highly disruptive role, worker fears of automation also run
higher than the average: Workers in automotives, business
support and logistics, advertising and marketing, and retail are
proportionately more worried about new technology replacing
their jobs than those in other industries. Forty-two percent of
workers in the business support and logistics industry have
above-average concerns about new technology eliminating
their jobs. Schawbel said that fear stems from the fact that the industry is already seeing it happen. Self-driving
trucks already are threatening the jobs of truck drivers, and it is causing massive panic in the profession, he
said.
“There is a fear, with some research to back it up, that it’s going to be hard to retrain and retool truck drivers
to take on other jobs,” Schawbel said. “You know with a truck driver you can just eliminate the truck driver,
whereas with professionals doing finance or accounting, certain tasks that they do can be automated, but they
have a little more flexibility to do other tasks that could be more valuable.”
Elmer Guardado, a 22-year-old account coordinator at Buie & Co. Public Relations, fits two demographics that
are more likely to worry about new technology replacing them: he is young, and he is in the advertising and
marketing industry. But he remains convinced that human skills will set him apart from the automated
competition. “It’s not something I’m actively worried about,” Guardado said. “Because I know there are so
many parts of my job that require a level of nuance that technology won’t be able to replace anytime
soon.”
Guardado says that his communication skills are a valuable asset that he brings to the workplace that a
computer can’t compete with quite yet. But he also understands why his peers may be more afraid than other
age groups. “I think older generations maybe process this potential fear in a more abstract way,” Guardado
said. “Whereas 18- to 24-year-olds see it first-hand, right? We actively dealt with it growing up and saw
technology consistently skyrocket throughout our entire lifetime.”
The survey found a fairly optimistic view on the future of AI, with nearly half of workers (48%) saying the quest
to advance the field of artificial intelligence is “important.” Only 23% called it “dangerous.” They remain more
worried about their own kind: 60% of workers said that human intelligence is a greater threat to humanity than
artificial intelligence. Sixty-five percent of survey respondents said computer programmes will always reflect
the biases of the people who designed them.

HOW WORRIED ARE YOU THAT MANKIND MAY BE ON A TRAJECTORY TOWARDS TECHNOLOGICAL SINGULARITY?

[60] How AlphaZero has rewritten the rules of game play on its own
Will Knight. MIT Technology Review. Feb 22, 19
www.technologyreview.com/s/612923/how-alphazero-has-rewritten-the-rules-of-gameplay-on-its-own/

David Silver invented something that might be more inventive than he is. Silver was the lead researcher on AlphaGo, a computer
programme that learned to play Go—a famously tricky game that exploits human intuition rather than clear rules of play—by studying
games played by humans. Silver’s latest creation, AlphaZero, learns to play board games including Go, chess, and Shogi by practising
against itself. Through millions of practice games, AlphaZero discovers strategies that it took humans millennia to develop. So could
AI one day solve problems that human minds never could? I spoke to Silver at his London office at DeepMind, now owned by
Alphabet.
In one famous game against possibly the best Go player ever, AlphaGo made a brilliant move that human observers initially
thought was a mistake. Was it being creative in that moment?
“Move 37,” as it became known, surprised everyone, including the Go community and us, its makers. It was
something outside of the expected way of playing Go that humans had figured out over thousands of years.
To me this is an example of something being creative.
Since AlphaZero doesn’t learn from humans, is it even more creative?
When you have something learning by itself, that’s building up its own knowledge completely from scratch, it’s
almost the essence of creativity. AlphaZero has to figure out everything for itself. Every single step is a creative
leap. Those insights are creative because they weren’t given to it by humans. And those leaps continue until
it is something that is beyond our abilities and has the potential to amaze us.
You’ve had AlphaZero play against the top conventional chess engine, Stockfish. What have you learned?
Stockfish has this very sophisticated search engine, but at the heart of it is this module that says, “According
to humans, this is a good position or a bad position.” So humans are really deeply in the loop there. It’s hard
for it to break away and understand a position that’s fundamentally different. AlphaZero learns to understand
positions for itself. There was one beautiful game we were just looking at where it actually gives up four pawns
in a row, and it even tries to give up a fifth pawn. Stockfish thinks it’s winning fantastically, but AlphaZero is
really happy. It’s found a way to understand the position which is unthinkable according to the norms of chess.
It understands it’s better to have the position than the four pawns.
Does AlphaZero suggest AI will play a role in future scientific innovation?
Machine learning has been dominated by an approach called supervised learning, which means you start off
with everything that humans know, and you try to distil that into a computer programme that does things in
just the same way. The beauty of this new approach, reinforcement learning, is that the system learns for
itself, from first principles, how to achieve the goals we set it. It’s like a million mini-discoveries, one after
another, that build up this creative way of thinking. And if you can do that, you can end up with something that
has immense power, immense ability to solve problems, and which can hopefully lead to big breakthroughs.
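The self-play loop Silver describes can be illustrated at toy scale with tabular Q-learning on the game of Nim. The sketch below is an illustrative stand-in, not DeepMind's method, but it shows a competent policy emerging from nothing but the win/lose signal:

```python
# Toy illustration of self-play (not DeepMind's code): tabular Q-learning
# teaches itself Nim -- players alternately take 1-3 sticks, and whoever
# takes the last stick wins -- given nothing but the win/lose signal.
import random
from collections import defaultdict

random.seed(1)
Q = defaultdict(float)         # Q[(sticks_left, sticks_taken)] -> value
ALPHA, EPS = 0.1, 0.2          # learning rate, exploration rate

def legal(n):
    return [a for a in (1, 2, 3) if a <= n]

for episode in range(20000):   # both sides share and update the same Q table
    n, history = 10, []
    while n > 0:
        moves = legal(n)
        if random.random() < EPS:
            a = random.choice(moves)                  # explore
        else:
            a = max(moves, key=lambda m: Q[(n, m)])   # exploit
        history.append((n, a))
        n -= a
    # The player who moved last won: +1 for their moves, -1 for the
    # loser's, propagated backwards with alternating sign.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

best = max(legal(3), key=lambda m: Q[(3, m)])
print(best)  # optimal: with 3 sticks left, take all 3 and win immediately
```

No human ever tells the program the well-known "leave a multiple of four" strategy; the table of values converges towards it through millions of tiny self-discoveries, which is the point Silver is making.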
Are there aspects of human creativity that couldn’t be automated?
If we think about the capabilities of the human mind, we’re still a long way away from achieving that. We can
achieve results in specialized domains like chess and Go with a massive amount of computer power dedicated
to that one task. But the human mind is able to radically generalise to something different. You can change
the rules of the game, and a human doesn’t need another 2,000 years to figure out how she should play. I
would say that maybe the frontier of AI at the moment—and where we’d like to go—is to increase the range
and the flexibility of our algorithms to cover the full gamut of what the human mind can do. But that’s still a
long way off.
How might we get there?
I’d like to preserve this idea that the system is free to create without being constrained by human knowledge.
A baby doesn’t worry about its career, or how many kids it’s going to have. It is playing with toys and learning
manipulation skills. There’s an awful lot to learn about the world in the absence of a final goal. The same can
and should be true of our systems.

“ALL THAT GLITTERS IS NOT GOLD.” DOES THIS DESCRIBE DISRUPTIVE TECHNOLOGY?

[61] Ensuring economic security in the gig economy (extracts)


Giuliano Bonoli. Business Times. Mar 13, 19
www.businesstimes.com.sg/opinion/ensuring-economic-security-in-the-gig-economy

One of the most disruptive ways in which technology is changing the world of work is without doubt the so-
called "gig" economy, where unlike in the traditional firm-based model, work is not performed by employees,
but by freelancers (or "giggers") who are hired only for the time it takes to perform specific tasks … There
are, at least, two different types of gig work. The best-known type is on-demand work, facilitated by Internet-
based apps and platforms that connect service sellers and buyers in local markets (for example, Uber,
TaskRabbit, Handy.com, Care.com and so forth) … But there is also crowdwork, which can be performed by
workers based anywhere in the world as long as they have access to a computer. They execute simple tasks
for which they are paid usually small amounts, typically a few cents per task. One of the best-known
crowdwork platforms is Amazon's Mechanical Turk. The tasks performed by crowdworkers are called "human
intelligence tasks" and cannot be outsourced to machines, like filling in psychology questionnaires or training
machine learning algorithms in face recognition.
What are the consequences of such disruptive technological change for workers? The debate on the gig
economy provides us with various scenarios. The digital optimists believe that not only consumers, but also
workers, will benefit from the increased flexibility and empowerment that come with being self-employed,
such as having greater control over work schedules, clients, jobs and projects. In fact, this vision is very far
from reality for most gig workers. Take Uber drivers: if they want to obtain a reasonable income, they have
to drive around for several hours every day, often just waiting for a client to call.

Socially minded gig economy gurus and executives, above all in America, propagate the optimistic view of the
gig economy. The narrative is often one of new forms of work providing a kind of last resort safety net for
poor people - a safety net that the state is no longer capable or willing to guarantee. The Internet is full of
quotes from new economy big names who argue this. David Plouffe, who ran the 2008 presidential campaign
for Barack Obama and is a now a strategic advisor to Uber, says: "We're discovering that platforms like Uber
are boosting the incomes of millions of American families. They're helping people who are struggling to pay
the bills, earn a little extra spending money, or transitioning between jobs." Stacy Brown-Philpot, the CEO of
TaskRabbit who grew up in a disadvantaged Detroit neighbourhood, sees the gig economy as an opportunity
to "create everyday work for everyday people". Could the gig economy really be replacing the fading welfare
state?
Then there are the pessimists who see the kind of work provided by Internet-based platforms as insecure
and alienating. In their eyes, gig work represents the ultimate stage in the casualisation of work and the
destruction of workers' rights. From this perspective, gig economy workers are like employees without
workers' rights.
As always, reality is in the middle. Platforms like Uber or TaskRabbit do provide access to income streams to
people who would find it extremely difficult to compete in the regular labour market. This is, above all, the case
for unskilled workers. But having access to an income stream is not enough. Freelancing of the kind that
takes place through Internet-based platforms has two main shortcomings. The first one is the lack of short-
term income security. Revenues for giggers can be unpredictable. Some level of short-term economic
security is essential to be able to lead a "normal" life, which by most definitions would include having a family,
renting or buying accommodation, and planning for the future.
The second problem is income security in the long term. For employees, this security is provided by social
insurance. When working, employees and their employers pay social contributions that entitle workers to an
income stream if they are unable to work due to social risks such as unemployment, disability or old age. To
some extent, access to long-term income security is a matter of earnings. High skilled, high-earning
freelancers can buy insurance cover on the market for all the social risks mentioned above.
The gig economy, like most disruptive technological innovations, is both an opportunity and a threat for
social cohesion and stability. It is an opportunity because it is inclusive by nature. There are very few
obstacles for motivated individuals who want to participate in it. It is a threat because the rewards it offers for
participation are very far away from what we consider as a socially acceptable way of living in advanced
economies. Society, governments and firms still need to find a balance between flexibility and security
around this disruptive but very efficient solution to create markets for talent.
Unsurprisingly, there is little consensus on how to provide a reasonable level of economic security to people
working in the gig economy. Some argue that enforcing existing legislation would be enough, and giggers
should simply be considered as employees and receive employee rights. This solution is often favoured by
the trade unions and governments. However, it is not applicable everywhere. First, some giggers are true
freelancers who chose to be self-employed and who work for several clients, so that there is no legitimate
reason for governments to reclassify them as employees. Second, forcing standard employee status on gig
economy workers may destroy the business model, which is based on low-cost and low-priced services. This
would mean denying access to work and income to those who are profiting from the opportunities provided by
the gig economy.
We may need more innovative solutions. In its 2019 report on the changing nature of work, the World Bank
argues that "traditional provisions of social protection based on steady wage employment, clear definitions of
employers and employees, and a fixed point of retirement are becoming increasingly obsolete". Let us not
forget that the social insurance systems in place today are a legacy of the early days of industrialisation. Social
insurance was invented in 19th-century Germany by Otto von Bismarck and it is at least questionable whether
this model is suitable to today's working world.
The idea that social protection needs to be adapted to the changing nature of work is now firmly
embedded in public debates. The most striking proof of this is the return to the scene of an old idea: a
universal basic income (UBI) paid unconditionally to every citizen. The idea has gained renewed popularity
precisely in relation to fears of robots taking over more and more jobs, and work becoming ever more
precarious. Another alternative would be to improve the financial support given to households with
children. In fact, the uncertainty due to income fluctuations is especially a problem for families. Adult-only
households are much more able to cope and adapt. Many OECD (Organisation for Economic Co-operation
and Development) countries provide child allowances, that is, benefits that are paid to households with
dependent children. These could be strengthened and connected to the debate on basic income.
These debates show that the disruptive technological change and the transformation of work are generating
new ideas with regard to the reform of the social protection systems that we have inherited from the industrial
economies of the post-war years. The challenge before us is to preserve the high levels of social cohesion
and economic security achieved in the past in this new emerging economic and technological world.
Obviously, the implications of this challenge go beyond the narrow field of social policy. Preserving some form
of social cohesion is essential in modern democracies. Otherwise, those who feel left behind tend to turn to
anti-system political parties with unpredictable consequences for the preservation of our prosperity and way
of life.
The occupational structure is changing and society needs to understand that, given the surplus of low-
skilled labour in all advanced economies, a substantial effort will have to be made to support the living
standards of those who cannot participate in mainstream wealth creation processes. This has to be
done intelligently, that is, in a way that preserves work incentives while at the same time protecting the
weakest members of the workforce from poverty and exclusion.
The technological transformations that we are witnessing in recent years have the potential to produce
enormous gains in terms of quality of life. The dark side is polarisation and a very real risk of exclusion for
some workers. Public policy must make sure that society as a whole will profit from this tremendous
opportunity and that it will do so in an inclusive way.

[62] 5 types of gig economy workers (extracts)


Yuen Sin. The Straits Times. Nov 17, 19
www.straitstimes.com/singapore/5-types-of-gig-economy-workers
www.straitstimes.com/singapore/manpower/gig-work-not-an-easy-ride

The vulnerable: only employment option for some


After being released from prison for criminal breach of trust offences in 2016,
Mr Raymond Chia, 48, a former restaurant manager, found it difficult to land a
full-time job … He started delivering food on a personal mobility device with
Uber's food delivery arm UberEats in 2016, and then for GrabFood after Grab's
acquisition of Uber last year. For working 14 hours a day, seven days a week,
he earns $3,000 a month - more than the $2,000 he used to earn as a restaurant
manager.
About one in five of the 50 private hire drivers and delivery riders
polled by Insight is an individual from a vulnerable background,
like Mr Chia. Besides former offenders, this group includes single
parents, the elderly poor, and those with health issues or special
needs. Most surveyed by Insight have low educational
qualifications of just O or N levels and below, with one without
any schooling. For this segment, a job in the gig economy is often
the only option, given the low barriers to entry and
flexible arrangements that they require to cope with caregiving
arrangements or health conditions …
Labour economist and Nominated MP Walter Theseira says this
group of workers is probably, on balance, helped by the gig
economy. "Prior to that, it would have been difficult for them to
find and keep regular employment," he says. But National
University of Singapore sociologist Tan Ern Ser warns that this
group may be at risk of staying at the bottom permanently,
with low prospects of upward social mobility. Few in this group
are going for skills upgrading, according to the Insight street poll,
though many hope to secure better-paying jobs in the long term.
Mr Mohd Anuar Yusop, executive director of non-profit body AMP
Singapore, says if current initiatives take off, many vulnerable
workers' needs for flexible and inclusive workplace environments
can be met in the future. He cites the Enhanced Work-Life Grant
for flexible work arrangements to accelerate the pace of reforms
in workplace design, culture and human resource practices.
The drifters: focusing on flexibility over career prospects
At a time when his peers are reaching their career peak, Mr Muhd Firdaus, who
is on the cusp of turning 40, has been working as a food delivery rider for the
past seven years. He has a diploma in hotel management and worked as a
hotel concierge for three years before deciding to quit to deliver food for
Foodpanda and Deliveroo instead. Prior to working as a hotel concierge, he
was a regular serviceman in the army for a decade. "It was tiring as an NS
(national service) regular, so I did not extend my contract. Working as a
concierge is also not easy because you have to sit through 10-hour shifts," says
Mr Firdaus, who brings in about $3,000 a month now. "This job is flexible. I
wanted to have more time to spend with my family," says Mr Firdaus, who has
three children - five-year-old twins and a son, eight. His wife earns about $2,000 a month as a nurse. But their dual income is barely
enough to make ends meet, as they also support and live with his ageing parents in a four-room flat. Mr Firdaus works eight hours a
day, five to six days a week, from 9.30am to 3pm and 6pm to 10.30pm. In between, he goes home to rest and spend time with his
young children. He earns about the same pay as he did in his hotel concierge job, but takes home more cash now as there is no
Central Provident Fund (CPF) monthly deduction.
[Mr Firdaus] belongs to a group of workers who have some educational qualifications and are eligible to take
on other jobs, such as in the security, service or logistics lines, which come with a career or a progressive
wage ladder, but who prefer to work in the gig economy. [Theseira] says the issues faced by this group of
workers may be a problem of economic structure. "Why is it that the gig economy jobs, which require no
formal skills or training, end up paying more (or being more attractive) than positions which do require skills
and experience? I view it as a call to study why wages are weak in jobs that require skills and experience, if a
gig job can pay better," he says … [He] says he is concerned that gig workers may be looking only at take-
home income and not gross income, which includes CPF contributions. "Without CPF contributions, it is very
difficult for low-income workers to ever afford a flat, or to pay for their healthcare and retirement," he adds. "I
consider the CPF differential in take-home wages a distortion to the economy and something that should be
addressed with policy, because it also harms the workers - not today, but definitely in five, 10 years."
The now-whats: 'quick fix' becomes permanent job
He may be only 23, but Mr Zane Chiang has already experienced one retrenchment. Three years ago, the Japanese restaurant where
he had worked for five years as a waiter felt the squeeze from high rents and closed down. Being unemployed was a shock after
working 13-hour days. Yet, when Mr Chiang went for about five interviews for events and office jobs, he was unsuccessful. So he
decided to be a food delivery rider to earn some money and pass time in the interim.
More than half of 50 food delivery riders and private hire drivers Insight interviewed during a street poll say
they became gig workers as a "quick fix" after an unforeseen crisis such as losing a job, or simply to save
for a rainy day or buy items. According to a gig economy survey of 200 food delivery workers and private hire
drivers commissioned by Insight, 79 per cent say they have been working for two years or less … Labour MP
Ang Hin Kee, who is the assistant director-general of the National Trades Union Congress, says: "A lot of
these workers started off doing this because they were going through a transition - no one will say they
want to be a delivery worker or driver, or that they meant for it to be a permanent job." This corresponds to
findings from the survey of 200 workers that three-quarters of them are part-timers. However, Mr Ang says a
number do end up doing gig work full time. "The business incentive model of such platforms becomes so
attractive that it shapes worker behaviour and they find themselves working longer hours and staying on
with the job permanently," says Mr Ang. "The job is attractive because if they hit a certain number of trips,
they can get a higher level of incentives. They also get the cash immediately, instead of having to contribute
some of it to CPF (Central Provident Fund)," he adds …
For people like Mr Chiang who find themselves in it for the long haul, Mr Ang hopes gig economy platforms
will exercise more responsibility to give workers more benefits and protection. "Since they are operating in
Singapore, they need to comply with responsible employment conditions," he says. "They can do more,
such as co-funding part of a training fund, helping to defray rental costs while workers go for the training, or
offering health screenings. They may have an interest to train their workers in other skill sets as they are also
diversifying to offer various types of products and services, and these workers can be tapped for that."
The moonlighters: taking on extra work to meet expenses
For some workers, holding a full-time job may not be enough to cover their basic needs such as food or
education expenses. They turn to the gig economy to plug the gap, working at night after their day jobs and
during days off.
Dental assistant Rachel Galvez, 34, for instance, started delivering food with GrabFood on a personal mobility device earlier this
month on her days off, to get extra money for daily expenses. Her husband is a security officer and their combined monthly take-home
pay is about $3,200. They have three children aged two to 10 years old. Besides having to pay for a maid to look after the children,
they find it difficult to get by as her husband has to make alimony payments to his former wife. He also has to support a child from his
first marriage, which is an additional expense of about $500 every month. Once or twice a week, she delivers food with GrabFood for
10 to 11 hours a day. Her husband has also been working part-time with Deliveroo and GrabFood for the past year. This adds about
$500 to $1,000 to their household income …
Associate Professor Irene Ng of the NUS' Department of Social Work says the long hours these people put
into gig work in their free time leave them with no time or resources to upgrade their skills. Mr Ang says it is
worrying if people with other skill sets who are juggling gig jobs with a full-time career are lured by the short-
term benefits of gig jobs and move away from the industries or skills that they have been trained in. Mr
Anuar says social organisations should study
how many who fall into this category are from the sandwiched group. These are people with young and elderly
dependants whose income from full-time work is just above the eligibility criteria for most assistance schemes,
but who nevertheless have to incur large expenses to meet needs such as the medical expenses of an elderly
parent with illness or disability.

"Institutions should be ever ready to look into the cases of those who have fallen through the cracks," he says.
In cases where a family has trouble meeting expenses on basic needs, the income they earn from
moonlighting should not be factored into per capita income calculations when assessing eligibility for
assistance schemes, he adds.
Mum and Dad... and Grab: when delivering food is a family affair
It is a family affair - both mummy and daddy are GrabFood delivery riders, and this has helped them provide for their children. Ms
Taslinna Tahrunshah, 29, delivers food on her bicycle from 8am to 6pm every day, before picking up her one-year-old daughter from
an infant care centre. She has six children and rotates her GrabFood work shift with that of her husband, Mr Wildan Muhammed, 23.
He leaves the house on a bicycle to deliver food from 8am to 2pm, then goes home to care for the children when their classes end.
He leaves for work again at 4pm and continues till 9pm, while Ms Taslinna puts the children to bed. "We each earn about $1,200 and
it is not enough to support the family," says Ms Taslinna, who quit her security officer job of nine years because its 12-hour shift did
not allow her to care for her children. All eight live in a one-room flat.
Families, such as Ms Taslinna's, which do not have an alternative stable source of income, are especially
vulnerable in risky gig economy jobs where change may be the only constant. [Theseira] notes: "It's quite
high risk because as we've seen, work in the gig economy is vulnerable to regulatory developments and also
to market conditions. The gig economy really is an oligopoly - the delivery platforms are free to change
compensation at will, and it's likely that any move by one to reduce wages will be quickly followed by the
others." Foodpanda in Malaysia changed its payment policy for riders at the end of September, resulting in
protest strikes in several states because riders' salaries were affected. In Singapore, private hire
compensation via incentive schemes has also fallen over the past year …
[Theseira] points out: "Some issues we need to look at are whether there is a pathway for those family
members with marketable skills or qualifications to look for jobs that use those skills. And if so, is the pay and
conditions of work in skilled jobs sufficient to outweigh the temptation of working in the gig economy?"
Regarding support for families with dual-income parents in gig jobs, Ms Kanak Muchhal, who is in charge of
women and befriender support at charity group Daughters Of Tomorrow (DOT), says employers can give
mothers rotating shifts that work for them, such as several times a week on the days they can make it, or
working only in the afternoon. She adds that DOT has an arrangement with its partner, beauty chain Sephora,
where mothers can choose shifts convenient for their schedules.

CAN YOU THINK OF OTHER INSTANCES WHERE TECHNOLOGY CAN BE HIJACKED FOR NEFARIOUS PURPOSES?
[63] Technology is terrorism’s most effective ally. It delivers a global audience
Jason Burke. The Guardian. Mar 17, 19
www.theguardian.com/commentisfree/2019/mar/17/technology-is-terrorisms-most-effective-ally-it-delivers-a-global-audience

Terrorism is effective because it always seems near. It always seems new. And it always seems personal.
Ever since the first wave of terrorist violence broke across the newly industrialised cities of the west in the late
19th century this has been true.
It feels personal because, although statistics may show we are many times more likely to die in a banal
domestic accident, we instinctively conclude from an attack on the other side of the street, the city or, in the
case of New Zealand, the other side of the world, that we might be next.
Terrorism always seems near – at least when it happens in an environment resembling our own – because
the shocking images on our phones, televisions or newspapers erase the distance between us and the source
of danger. It always seems new because although each attack follows a familiar timeline – the first reports
amid chaos and confusion, statements by police and politicians, analysis from commentators waking up in
successive time zones, the identification of attackers and victims, condolences and flags at half-mast, debates
about radicalisation etc – each is unique.
In the 1970s, terrorism expert Brian Michael Jenkins famously said that “terrorism was theatre”. This
succinctly captured its spectacular, performative nature. These days, it seems more like an endless TV series
that everyone wishes was over but that everyone watches nonetheless. That Brenton Tarrant, the 28-year-old
Australian who shot dead 49 worshippers at two mosques in Christchurch, broadcast the attack live on
Facebook via a helmet-mounted GoPro camera is a new low, but a logical and sadly inevitable one.
Most people naturally assume that the technological changes that influence terrorism are those involving
weapons or explosives. But the big shifts in this field happened long ago. Dynamite was patented in 1867 and
automatic weapons became widespread after the Second World War. If terrorists have made the occasional
use of chemical weapons or a hijacked plane, the vast proportion of modern-day attacks use technology that,
in its essentials, is not new. What has changed beyond recognition are the media that enable individuals or
groups to disseminate their message.
The significance of this is often missed because we are too focused on the violence. Terrorism is “propaganda
by deed”, the term coined in the 19th century by its first modern practitioners. Violence alone is not enough.
That violence has to terrorise – inspire irrational fear and so change minds – but has to radicalise and mobilise
too. It has to send a message to enemies, supporters and, perhaps most importantly, those who are neither.
Al-Qaida and Isis made this explicit. So did Tarrant’s “manifesto”. Every change in media technology over the
past half century or more, arguably much longer, has made it easier for terrorists to achieve this aim. In the
1950s and 60s, radio and the new photojournalism meant violence could influence public opinion thousands
of miles away in colonial powers. So extremists chose terror tactics in the last days of the British mandate in
Palestine and during the Algerian war of independence against France.
In the 1970s, terrorists were quick to grasp the potential of overseas TV broadcasts. The terrorist attack on
Israeli athletes at the Munich Olympics in 1972 was covered live by the world’s networks. The operation had
been designed to exploit this new capacity. In the 1990s, it was the advent of satellite TV. No longer did
western editors or those working for Arab regimes control what news went out to the masses of the Middle
East. The channels of communication were becoming more direct and that was hugely empowering for the
militants. On 11 September 2001, Osama bin Laden knew no one could stop live images of his group’s
massive attacks in the US reaching the billion people in the Islamic world who were his primary audience.
Then came the biggest change: digital. As media organisations evolved, so did terrorist ones. Top down was
out, peer to peer was in. There were citizen journalists who followed broad guidelines but were not formally
affiliated to an organisation and “freelance” terrorists who did much the same. The mainstream media were
increasingly redundant. Why fight to get on the BBC or al-Jazeera if you could just create your own channels
and reach your audience directly? Isis showed how effective that could be.
Right-wing extremists were slower to exploit the potential of this seismic shift. Now, with the Christchurch
attack, they have caught up. There have been live streams of terror attacks before – a French extremist
streamed on Facebook the knife murder of a policeman and his partner in 2016 – but none as high profile.
It is often said we get the media we deserve, but that is a simplification. The media, like terrorism, are part
of our societies and, like terrorism, are influenced by broader trends. Perhaps the most striking element of the
atrocity in New Zealand is how the filming of the video was an integral part. “Let’s get this party started,”
Tarrant says, as he gets into his car, talking directly to the viewer. He shoots images of his face in a twisted
version of that most contemporary of phenomena: the selfie. The point of the attack is not just to kill Muslims,
but to make a video of someone killing Muslims.
Tarrant said in his “manifesto” that he did not seek to die, but accepted that might happen. But he will still be
seen as a martyr to the cause by supporters. The word martyr is of Greek origin and refers to a witness. The
Arabic equivalent has similar roots. Witnesses need an audience or their acts are empty. For some terrorists,
that witness is God alone, but these are few. For a growing number, that audience, via Facebook, via virtually
unmoderated sites in the dark corners of the web, via the mainstream media they so detest and suspect, is
everyone.
In Tarrant’s world, on his live stream, in his own mind and those of his followers, he is a warrior, a racial hero,
a leader but also, in a wider contemporary sense, a celebrity, if only for a moment. In a terrible, twisted way,
he is not wrong.

DO YOU SHARE THE WRITER’S PERSPECTIVE?

[64] What science needs is something scientists haven't thought of


Noah Smith. The Straits Times. Nov 7, 18
www.straitstimes.com/opinion/what-science-needs-is-something-scientists-havent-thought-of

In a recent Forbes article, astronomer and writer Ethan Siegel called for a big new particle collider. His
reasoning was unusual. Dr Siegel argues that an even bigger (and much more expensive) collider should be
built on the chance that it discovers some as-yet-undreamt-of new phenomena. But fortunately, governments
seem unlikely to shell out the tens of billions of dollars required, based on nothing more than blind hope that
interesting things will appear.
Typically, particle colliders are created to test theories - physicists' maths show that undiscovered particles
ought to exist, and experimentalists use colliders to see if they really do. This was the case with the Large
Hadron Collider built in Europe with the express purpose of detecting the elusive Higgs boson. It succeeded
at that task, earning a Nobel Prize for the theorists who first predicted the particle. But particle physics is
running out of theories to test. The Higgs discovery puts the capstone on the so-called standard model of
particle physics. Assuming the Hadron Collider doesn't pop out any completely unexpected new particles -
which it has so far shown no sign of doing - it leaves theoretical physicists with nowhere to go. Particle
physicists have referred to this seeming dead-end as a nightmare scenario. But it illustrates a deep problem
with modern science. Too often, scientists expect to do bigger, more expensive versions of the research that
worked before. Instead, what society often needs is for researchers to strike out in entirely new directions.
During the past few decades, a disturbing trend has emerged in many scientific fields. The number of
researchers required to generate new discoveries has steadily risen. This doesn't mean the research is no
longer worth doing. But it does suggest that specific scientific and technical fields yield diminishing marginal
returns. In the 1800s, a Catholic monk named Gregor Mendel was able to discover some of the most
fundamental concepts of genetic inheritance by growing pea plants. In the 1950s, a handful of scientists at
university labs discerned the basic structure of DNA. A few decades later, a large team of scientists sequenced
the entire human genome for a little less than US$3 billion. Now, biotech venture capital spends more than
that in a single year - one can receive hundreds of millions of dollars to discover narrow applications of the
grand ideas that began in Mendel's modest garden.
If scientific fields are like veins of ore - where the richest and most accessible portions tend to get mined out
first - how does technology keep advancing? Partly by throwing more money at the problem, but also by
discovering new veins. The universe of scientific fields isn't fixed. Today, artificial intelligence (AI) is an
enormously promising and rapidly progressing area, but back in 1956, when a key early conference on the
topic was held, it was barely a glimmer in researchers' eyes. To keep rapid progress going, it makes sense to
look for new veins of scientific discovery. Of course, there's a limit to how fast that process can be forced - it
wasn't till computers became sufficiently powerful and data sets sufficiently big that AI really took off.
But the way that scientists now are trained and hired seems to discourage them from striking off in bold new
directions. Scientists train to work in specific fields - a physics major studies to become a particle physicist,
and gains experience by working under senior, more established particle physicists. They are allowed some
leeway to strike out on their own, but their skills and specialised knowledge - what economists call human
capital - are oriented towards extending and continuing the work of their advisers. In other words, they get
pointed towards working in areas of diminishing returns. This means that as projects like the Hadron Collider
require more particle physicists, more researchers are trained to become particle physicists, perpetuating the
cycle. Not only does this process direct researchers away from novelty, but it also ignores society's most
pressing needs. Discovering the Higgs boson, or spinning theories about the origin of the universe, may help
to satisfy human curiosity, but it doesn't do much to generate useful technologies in the short term. With
climate change a looming crisis, the need to discover sustainable energy technology - especially better power
storage - rivals the danger faced by the United States in World War II, when many of the country's best
physicists were called on to drop their research and join the Manhattan Project. Plumbing the secrets of the
cosmos is a good thing, but at present, there are much more important areas that demand the talents of the
world's most brilliant scientists.
Science thus needs less iteration and more reallocation. Researchers should be prompted to get exposure
to a wider array of mentors. They also should be given more leeway to focus on their own ideas. Some
sciences might even take inspiration from the field of economics, where - for better or worse - methodological
novelty tends to be rewarded above all else. Granting agencies also can do their part. Giving more money
to cheap, highly novel projects can help reorient science towards finding new areas of knowledge. A model
might be the Defence Advanced Research Projects Agency, renowned for its pursuit of cheap, fast
breakthroughs with ad hoc teams of researchers pulled from a variety of universities and labs.
So what science needs isn't an even bigger particle collider; it needs something that scientists haven't thought
of yet.
