Deepseek Aitech98

Dr. Mathew Maavak discusses the potential weaponization of AI, particularly in military contexts, and questions the role of AI in recent conflicts, specifically the Gaza campaign. He argues that AI principles are ineffective and may be used as a scapegoat for human oversight failures in warfare. The conversation raises concerns about the future of AI as a decision-maker in life-and-death scenarios.


2/14/25, 11:21 PM ‘AI may become judge, jury and executioner’ – global risks expert to RT — RT World News

leaderships/consensus); fake intelligence (e.g. creating a casus belli); and spoofing communications (e.g. subverting the chain of command), among others. The possibilities here are simply endless!

AI is evolving at an exponential rate. Yesterday’s science fiction is tomorrow’s reality!

RT: As reported recently by the Washington Post, Google appears to have been providing AI tools for the Israel Defense Forces (IDF) since the start of its Gaza campaign. Could the change in the company’s AI principles be linked to that?

Dr. Mathew Maavak: I highly doubt it. The IDF’s use of Google’s cloud computing services and related tools (Amazon was teased as an alternative) may arguably be portrayed as the canonical starting point for the weaponization of AI. But why would the IDF want a multinational civilian workforce based in the United States to have access to its military operations?

If Google provided AI tools to the IDF, it would have done so under directives from the US deep state. A nominally civilian entity cannot unilaterally supply sensitive AI tools for wartime use to any foreign power, allied or otherwise.

Logically speaking, Google’s participation in the Gazan carnage should result in a massive boycott by member states of the Organisation of Islamic Cooperation (OIC). But this will never happen, as too many politicians, “technocrats” and academics in the OIC are beholden to US patronage. (The ongoing USAID scandal is just the tip of the iceberg, revealing the extent of international subversion at play.) The guardrails of merit, bias and non-discrimination are also virtually non-existent in the OIC bloc, although they form the pillars of AI governance.

All in all, AI principles as they currently stand, whether in civilian or military spheres, are nothing more than a paper tiger.

https://www.rt.com/news/612575-ai-may-become-judge/ 7/12

RT: Again concerning the IDF, it has been revealed that a lot of the civilian deaths in Gaza were apparently not a result of poor AI tools, but of negligent human oversight. Perhaps military AI, when employed properly, could actually lead to more humane warfare?

Dr. Mathew Maavak: Honestly, I don’t think AI played a significant role in the genocidal war in Gaza. The use of AI would have led to a targeted military campaign, not a mad, blood-stained blunderbuss of terror. This was no “oversight”; this was intentional!

Compare Israel’s recent actions in Gaza to the relatively professional military campaign it conducted in the same area in 2014 – when human intelligence (HUMINT) and electronic intelligence (ELINT) played a bigger role vis-a-vis AI. Did AI dumb down the IDF, or is AI being used as a scapegoat for Israel’s war crimes?

The bigger question, however, is this: Why did the IDF’s AI-coordinated border security system fail to detect Hamas’ military activities in the lead-up to the October 7, 2023, cross-border attacks? The system is equipped with multiple sensors and detection tools across land, sea, air, and underground — making the failure even more perplexing.

In the final analysis, AI is being weaponized across all facets of human life, including religion, and US Big Tech is leading the way. Ultimately, under certain circumstances in the future, AI may be used to act as the judge, jury and executioner. It may decide who is worthy to live, and who is not.

We are indeed living in interesting times.
