AI Danger

The document outlines various dangers posed by AI, including the creation of fake media, biases in AI systems, job loss, and mental health issues. It highlights the potential for AI to exacerbate societal polarization, facilitate harassment, and lead to economic instability, while also addressing risks related to privacy, environmental harm, and the development of autonomous weapons. The document advocates for a pause in AI development to address these risks and implement necessary regulations.


Present dangers

Fake news, polarization and threats to democracy

Much of our society is based on trust. We trust that the money in our bank account is real, that
the news we read is true, and that the people who post reviews online exist.

AI systems are exceptionally good at creating fake media, also called deepfakes. They can create
fake videos, fake audio, fake text, and fake images. Creating fake media is not new, but AI makes
it much cheaper and much more realistic. These capabilities are improving rapidly.

 An AI-generated image of an explosion caused panic selling on Wall Street.
 A 10-second audio clip or a single picture can be enough to create a convincing deepfake.

Perhaps even more dangerous than the deepfakes themselves is how the existence of convincing deepfakes destroys trust: a real image can be dismissed as AI-generated, and people will believe it.

GPT-4 can write in a way that is indistinguishable from humans, but at a much faster pace and a fraction of the cost. We might soon see social media flooded with fake discussions and opinions, and with fake news articles that are indistinguishable from real ones, a scenario also called a “dead internet”.

This fuels polarization: groups of people come to trust different sources of information and narratives and, by consuming distorted representations of what is happening, escalate their differences until they culminate in violent and anti-democratic responses.

A halt on frontier models (our proposal) would not stop the models already used to create fake media, but it could prevent more capable future models. It would also lay the groundwork for regulation aimed at mitigating fake media and other specific harms caused by AI, raise public attention and awareness of these dangers, and demonstrate that they can be addressed.

Deepfake-powered harassment and scams

Deepfakes can not only steal famous people’s identities and create disinformation, but also impersonate you. Anyone with photos, videos, or audio of someone (and enough knowledge) can create deepfakes of them and use them to commit fraud, harass them, or create non-consensual sexual material. About 96% of all deepfake content is sexual material.

You can find a compilation of AI incidents, most of which involve deepfake news, scams and harassment, here.

As the section on fake news explains, our proposal would not prevent fake media altogether, but it could reduce them considerably. The reduction would not be small, considering that multipurpose AI systems like chatbots have become very popular, and we would be stopping them from becoming more capable and more widespread, including systems designed with fewer filters and trainable on new faces.

Biases and discrimination

AI systems are trained on data, and much of the data we have is in some way biased. This means that AI systems will inherit the biases of our society. An automated recruitment system at Amazon inherited a bias against women. Black patients were less likely to be referred to a medical specialist. Biased systems used in law enforcement, such as predictive policing algorithms, could lead to unfair targeting of specific groups. Generative AI models do not just copy the biases from their training data, they amplify them. These biases often appear without the creators of the AI system being aware of them.
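
To see how this amplification can happen, here is a minimal sketch in Python of one well-known mechanism (the numbers are hypothetical, not drawn from the studies above): a model that always outputs the most probable label turns a moderate skew in its training data into an absolute skew in its predictions.

    import numpy as np

    # Hypothetical numbers: 60% of past hires in the training data are men
    # (label 1) and 40% are women (label 0).
    rng = np.random.default_rng(0)
    train_labels = (rng.random(10_000) < 0.60).astype(int)
    base_rate = train_labels.mean()  # roughly 0.60

    # A model that simply outputs the most likely label (greedy "argmax"
    # decoding, as generative models often do) predicts 1 for everyone.
    majority = int(base_rate > 0.5)
    predictions = np.full_like(train_labels, majority)

    print(f"skew in training data: {base_rate:.0%}")          # ~60%
    print(f"skew in model output:  {predictions.mean():.0%}")  # 100%

The data was 60/40 skewed; the model’s output is 100/0. Real-world amplification is subtler than this toy case, but the pattern is the same: the model’s outputs end up more skewed than the already-skewed data it learned from.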

Job loss, economic inequality and instability

During the Industrial Revolution, many people lost their jobs to machines. However, new (often better) jobs were created, and the economy grew. This time, things might be different.

AI does not just replace our muscles as the steam engine did; it replaces our brains. Regular humans may not have anything left to offer the economy. Image generation models (which are heavily trained on copyrighted material from professional artists) are already impacting the creative industry. Writers are striking. GPT-4 has passed the bar exam, can produce excellent written content, and can write code (again, partially trained on copyrighted material).

The people who own these AI systems will be able to capitalize on them, but the people who lose their jobs to them will not. It is difficult to predict which jobs will be replaced first. AI could leave you unemployed and without an income, no matter how much time, money and energy you spent acquiring your experience and knowledge, and no matter how valuable they were a moment ago. The way we distribute wealth in our society is not prepared for this.

Policy measures like Universal Basic Income could prevent the worst of the economic consequences, but it is not clear whether they will be implemented in time. Once our jobs are replaced, we could be left without the bargaining power to demand social safety nets.

And even if we manage to properly navigate the problems surrounding inequality and instability,
we may end up in a world where our sense of purpose is lost. Many artists are feeling this
already, as they see their work being replaced by AI. Soon, it could be all of us who feel this way.

Mental health, addiction and disconnection between people

Social media companies have long been using AI systems that exploit our primate minds to maximize profit, damaging our mental health in the process. AI chatbots offering users a romantic relationship have seen huge growth over the last year, with more than 3 billion search results for ‘AI girlfriend’ on Google. These AI relationship apps are shown to be addictive, especially to “lonely vulnerable people”.

The companies controlling these apps are incentivized to make them as addictive as possible, and they wield tremendous power through their ability to shape the behavior and opinions of these models.

A pause on the biggest models could prevent them from becoming multipurpose chatbots that fit our needs perfectly before people understand their long-term ramifications.

Automated investigation (loss of privacy)

We leave a lot of traces on the web. Connecting the dots is hard and time-consuming, but AI is making it much cheaper. Large language models can autonomously search the web and are now good enough to analyze large amounts of data and find interesting details. This can be used to uncover information that would otherwise be very costly to obtain, for example:

 Find information on where an individual is likely to be at a certain time. This can be used
to track down dissidents or plan assassinations.

 Link anonymous accounts on the web to real-life identities. This can be used to find out
who is leaking information.

In September 2024, a group of students built an app that shows information about strangers, such as names, relatives and other personal data, in augmented reality, using facial recognition and LLMs.

Environmental risks

Environmental harms are already becoming significant, and the largest AI companies plan to greatly increase their energy consumption. You can read here about how AI will negatively affect the environment.

Autonomous weapons

Companies are already selling AI-powered weapons to governments. Lanius builds flying suicide drones that autonomously identify foes. Palantir’s AIP system uses large language models to analyze battlefield data and come up with optimal strategies.

Nations and weapons companies have realized that AI will have a huge impact on besting their enemies. We have entered a new arms race, a dynamic that rewards speeding up and cutting corners.

Right now, we still have humans in the loop for these weapons. But as the capabilities of these AI systems improve, there will be more and more pressure to let the machines decide. When we delegate control of weapons to AI, errors and bugs could have horrific consequences, and the speed at which AI can process information and make decisions could cause conflicts to escalate in minutes. A recent paper concludes that “models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons”.

Read more at stopkillerrobots.org

Near future dangers

Power accumulation and tyranny

Powerful AI models can be used to acquire more power. This positive feedback loop can lead to a few companies or governments holding an unhealthy amount of power. Control over thousands of intelligent, autonomous systems could be used to influence opinions, manipulate markets, or even wage war. In the hands of an authoritarian government, it could be used to suppress dissent and stay in power.

Biological weapons

AI can make knowledge more accessible, including knowledge about how to create biological weapons. This paper shows how GPT-4 can help students without a science background create a pandemic pathogen:

In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can
be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis
companies unlikely to screen orders, identified detailed protocols and how to troubleshoot
them, and recommended that anyone lacking the skills to perform reverse genetics engage a
core facility or contract research organization.

This type of knowledge has never been so accessible, and we do not have the safeguards in
place to deal with the potential consequences.

Additionally, some AI models can be used to design entirely new hazardous agents. A model called MegaSyn designed 40,000 new chemical weapons / toxic molecules in one hour. The revolutionary AlphaFold model can predict the structure of proteins, which is also a dual-use technology: predicting protein structures can be used to “discover disease-causing mutations using one individual’s genome sequence”. Scientists are now even creating fully autonomous chemical labs, where AI systems can synthesize new chemicals on their own.

The fundamental danger is that AI is lowering the cost of designing and deploying biological weapons by orders of magnitude.

Computer viruses and cybersecurity attacks

Virtually everything we do nowadays is in some way dependent on computers. We pay for our
groceries, plan our days, contact our loved ones and even drive our cars with computers.

Modern AI systems can analyze and write software. They can find vulnerabilities in software, and they could be used to exploit them. As AI capabilities grow, so will the capabilities of the exploits they can create.

Highly potent computer viruses have always been extremely hard to create, but AI could change that. Instead of hiring a team of skilled security experts or hackers to find zero-day exploits, you could use a far cheaper AI to do it for you. Of course, AI could also help with cyberdefense, and it is unclear on which side the advantage lies.

Read more about AI and cybersecurity risks

Existential Risk

Many AI researchers are warning that AI could lead to the end of humanity.

Very intelligent things are very powerful. If we build a machine that is far more intelligent than humans, we need to be sure that it wants the same things we want. This turns out to be very difficult, and is known as the alignment problem. If we fail to solve it in time, we may end up with superintelligent machines that do not care about our well-being. We would be introducing a new species to the planet, one that could outsmart us and outcompete us.

Read more about x-risk

Human disempowerment

Even if we only create AI systems that we can control individually, we could incrementally lose the power to make important decisions each time one of them is incorporated into institutions or everyday life. Those processes would end up having more input from AI systems than from humans, and if we cannot coordinate quickly enough, or we lack crucial knowledge about how the systems work, we could end up without control over our future.

It would be a civilization in which each system optimizes for a different objective, there is no clear direction in which everything is heading, and there is no way of changing it. The technical knowledge required to modify these systems could be lacking in the first place, or lost over time as we become ever more dependent on technology and the technology grows more complex.

The systems may achieve their goals, but those goals might not fully capture the values they were meant to serve. To some extent this problem is already happening today, but AI could significantly amplify it.

Digital sentience

As AI continues to advance, future systems may become incredibly sophisticated, replicating neural structures and functions more akin to those of the human brain. This increased complexity might lead to emergent properties like AI subjectivity and/or consciousness, which would make them deserving of moral consideration.

The problem is that, given our present lack of knowledge about consciousness and the nature of neural networks, we have no way to determine whether some AIs have any kind of experience, or what the quality of those experiences would depend on. If AIs continue to be built with only their capabilities in mind, through a process we do not fully understand, people will keep using them as tools, ignoring whatever desires they might have, and we could in fact be enslaving “digital people”.

Suffering risks

It’s not only that value lock-in could make us fail to achieve the best kind of world; it could also trap us in dystopias worse than extinction, extending through all of spacetime.

Possible locked-in dystopias with vast amounts of suffering are called S-risks and include worlds in which sentient beings are enslaved and forced to do horrible things. Those beings could be humans, animals, digital people or any other alien species the AI might find in the cosmos. Given how difficult we think it is to solve alignment completely, how badly we humans sometimes treat each other, how badly we treat most animals, and how we treat present-day AIs, a future like this doesn’t seem as unlikely as we’d hope.

What can we do?

For all the problems discussed above, the risk increases as AI capabilities improve. This means
that the safest thing to do now is to slow down. We need to pause the development of more
powerful AI systems until we have figured out how to deal with the risks.

See our proposal for more details.
