
Hello. Oh, hello and welcome. I'm Keren Yarhi-Milo. I am the dean of the School of International and Public Affairs and the Adlai Stevenson Professor of International Relations. Well, I cannot think of a timelier or more important topic to discuss than the role of artificial intelligence in democratic elections around the globe, and we are thrilled to be partnering with Aspen Digital on this effort. Over the course of the current year, about 2 billion people across 80 countries are expected to go to the polls to cast ballots. Carrying out free and fair elections is difficult even under normal circumstances. But in the current information ecosystem, it is even more challenging. As we know, the health of a democracy is linked to the integrity of information. And those who wish to spread lies and mis- and disinformation have never had the tools to do so at the speed and scale artificial intelligence allows. No wonder that democracies are facing stiff headwinds as autocrats from Moscow to Myanmar challenge the geopolitical order and use AI to tighten their grip on power. And I vividly recall what Maria Ressa, one of IGP's Carnegie Distinguished Fellows, told last year's graduating class of SIPA students. She described the lack of government oversight of these technologies as a Doomsday Clock for democracy. And we have already seen disturbing evidence of the dangers posed by AI to the integrity of elections. In Slovakia, for example, there was deep-faked audio of a candidate who appeared to be conspiring to rig the election. In Indonesia, deep fake technology was used to portray Suharto, the country's longstanding dictator who has been dead for 16 years, telling people to vote today. And this is only the tip of the AI iceberg. These misuses of technology are all happening as governments and tech companies are scaling back their resources to police fake content and fight disinformation.

Today's panels will examine four topics, including risks to the 2024 elections worldwide, lessons from recent global elections, and the roles and responsibilities of the tech companies. We'll also talk about implications for this year's US elections in particular, and I know many of you are interested in this.

We are fortunate, really fortunate, to have so many of the world's leading experts and practitioners on this subject with us here today. I am hopeful that today's discussions give us some reasons to be optimistic about the future of AI and democracy. Although, knowing some of the panelists, I'm not sure about the optimism part. We're already seeing some signs of progress, and we should mention them. Europe has passed promising new regulations to rein in tech companies and increase transparency and accountability. And here in the United States, lawmakers in 32 states have introduced 52 bills to regulate deep fakes in elections. And we have a lot of work to do here in the United States and elsewhere. And I hope that this forum, this conversation, will move us forward in thinking about the problems, the challenges, and the potential solutions. So before I turn it over to our co-host, I just wanna thank everyone again for coming and participating in today's panels. I wanna thank the staff who organized this.

Look, this event is the embodiment of IGP as an organization. We are discussing here an important topic that cuts across so many of our global challenges. Here at SIPA, we are focusing on technology and innovation, and this is a big part of what we will be looking at and doing. We're looking at democratic resilience. And this is exactly at the intersection of these two. And I'm very excited about the projects we have coming up in the pipeline, looking at those with our faculty, with our fellows, and definitely more to come. Again, we're doing this by bringing together the best of the private and public sectors to generate new ideas and solutions that are based on data and evidence. And that is why the IGP, the Institute of Global Politics, was created. This is exactly the vision that Secretary Clinton and I had when we started this. How do we take this on? What can we do to help? And this convening could not be more relevant and important today.

And with that, it is my great honor to introduce our co-host of today's event, Vivian Schiller. One of the highlights of this work that we're doing here at IGP is how we are joining forces with incredible partners on meaningful issues. And when Vivian and I first met, we knew immediately that we wanted to collaborate, we wanted to work together. We were really passionate about the same topics. And I'm so glad that we were able to do this and get to the point where we are doing this event today.

Vivian joined the Aspen Institute, where she is the vice president and leads Aspen Digital, just four years ago, after a long career at the intersection of news media and technology, with stints that included president and CEO of NPR, head of NYTimes.com, chief digital officer for NBC News, and global head of news at Twitter, among other things. Please join me in welcoming Vivian Schiller.

Thank you so much, Keren. I remember all too well our meeting in your office. It wasn't even really that long ago. It was in the fall. And it was just this instant spark, and we're like, yes, we have to do this. And wow, here we are already. Anyway, on behalf of the Aspen Institute and Aspen Digital, we are just so honored to partner with SIPA on this important conversation.

It's gatherings like this today that remind us that really a key measure of any
democracy is its capacity to absorb and adapt to change. In the short life of
this country, which is actually, I was reminded, younger than the life of this
university, people in good faith have come together to meet moments of
tremendous disruption. We are capable of it. We are capable of marshaling our ideas to mitigate the bad and, particularly, to harness the good into reforms that reflect and preserve democratic values.

It's important for us to remember we are capable of doing this, particularly as we today draw on the example of global elections that have happened to date, in an effort to learn about the AI-driven challenges ahead for this country's elections here in the US. In this way, we are really engaged in a quintessentially American tradition: forming a democratic response to emerging technologies. So AI advances, it's really important to say at the get-go, have tremendous potential for good. And we also know that those who would seek to disrupt free and fair elections, because they have tried before, are eager to enlist AI tools in their ongoing effort to undermine trust in democratic institutions, to pollute our information environment, and to distract or discourage voters.

This is true ahead of this November's elections, and it will undoubtedly remain true for the foreseeable future. So language tools, for example, can be co-opted to mislead minority communities. Audio tools can spoof the voice of a candidate or an election official to demobilize or disincentivize voters. Fake images or videos can be used to deceive the public at particularly critical moments.

And to me, what's even more alarming is even if voters are not fooled, we
now have to prepare to navigate a world where it's easier to completely
dismiss evidence-based reality. We could be living in a world where everyday
people will simply stop believing anything they see, anything they hear. And
to me, honestly, that's more terrifying than anything else I can think of.

It is part also of the autocrat's playbook. So this is what we're up against. In the words of our next speaker, it will take a village, in this case, to build social resilience that resists the pull to become a suspicious and stubborn people. We will have to insist that the truth is knowable and worth knowing. And we need to learn to trust again: institutions, information, and each other.

The democracy practitioners and experts here today are helping to define
what democratic resilience will mean in the face of these challenges. And by
the way, I should mention, in addition to our speakers, there's so many folks I
know in the room here who are working hard at these efforts even if they're
not on stage.

And we thank you for being here. I am confident that we have what it takes, all of us, the we, this country, to meet this moment and to secure a democratic future in the age of AI, as surely we must.
I am now pleased to introduce our first panel. Our first panel is called Setting the Stage: Risks to the 2024 Global Elections. And with us, I am so honored to introduce Secretary Hillary Rodham Clinton. She is professor of international and public affairs at Columbia University, the 67th Secretary of State, former senator from New York, and IGP faculty advisor and board chair. Věra Jourová, the Vice President for Values and Transparency at the European Commission. Hello, Madam Vice President. Maria Ressa, my friend of many decades, Nobel Peace Prize-winning journalist, co-founder, CEO, and president of Rappler, and IGP Carnegie Distinguished Fellow. And moderating is the spectacular journalist Gillian Tett, columnist and member of the editorial board at the Financial Times. I welcome you to the stage.

Well, thank you very much indeed, Vivian, and the Dean, for that wonderful introduction, which sets out the issues. I should say that I'm personally absolutely thrilled and honored to be moderating this panel. Because, first of all, I am a journalist who cares deeply about the truth at a time when it's under threat. An American citizen who's deeply worried about the election that's coming up. And also, when I'm not a journalist, I'm attached to Cambridge University, King's College, as an anthropologist trained in digital ethnography, and I passionately believe that the only way to handle AI responsibly is to add a second AI, which is anthropology intelligence, to understand the social impact.

And this is what this afternoon's gonna be all about. And I'd like to start perhaps with you, Secretary Clinton, and ask you. I think we first talked many years ago about the terrible threat of misinformation. I happened to have a background in Soviet studies and had seen some of the absolutely nutty misinformation that was going around in the Russian media many years ago, well before 2016, that was portraying you as some kind of she-devil out of Indiana Jones stalking the world, which was, you know, incredibly corrosive. And since then, it's got worse and worse. How alarmed are you about the upcoming elections? And is there anything that we can actually do to stop the tsunami of misinformation?

Well, Gillian, thanks so much for being here with us and for moderating. And I know you cover these issues and have a lot of, you know, experience in trying to make sense of them. And I think that anybody who's not worried is not paying attention. There's more than enough reason to be worried about what we've already seen. But certainly, I think, as we're here today doing this panel and having these other experts and practitioners speak to us, there are literally people planning how to interrupt, interfere with, distort elections, not just in the United States but around the world. And so, if I could just focus on the United States for a minute: what Gillian's referring to, I think, was really motivated by my time as secretary of state, doing the job that I was asked to do by President Obama, representing our values, our interests, our security around the world. And in the fall of 2011, after Putin announced that he would be coming back as president, they had elections for the Duma. And they were so blatantly fraudulent. I mean, videos of people throwing away ballots and people stuffing ballot boxes. This was not made up. This was, you know, very clear, undeniable distortion and interference. So as secretary of state, I said that, you know, the Russian people deserved better. They deserved free and fair elections where their votes would be cast and counted appropriately. And, having literally nothing to do with me, when the news came out about what had happened in those elections, tens of thousands of Russians, particularly in Moscow, Saint Petersburg, a few other places, went out into the streets to protest for their right to actually choose their leaders. And it totally freaked out Putin. And he actually blamed me publicly for the reaction in Russia. And that was the beginning of his efforts to undermine and take me down in, you know, very real time, starting before the 2016 election, but certainly picking up a lot of steam and impact during that election. And it was such an unprecedented and really quite surprising phenomenon. I don't think any of us understood it. I did not understand it. I can tell you my campaign did not understand it. The, you know, so-called dark web was filled with these kinds of memes and stories and videos of all sorts, you know, portraying me in all kinds of, you know, less than flattering ways. And we knew that, you know, something was going on, but we didn't understand the full extent of the very clever way in which it was insinuated into social media. If it had stayed on the dark web, you know, whatever, maybe a couple hundred thousand people would pay attention.

But this jumped into how we communicate. And the only thing I can say about it is, well, I could say two things about it. One, it worked. You know, there are people today who think I have done all these terrible things because they saw it on the internet, and they saw it on the internet in their Facebook feed or some, you know, Twitter this or Snapchat that. They were, you know, following the breadcrumbs. And what they did to me was primitive. And what we're talking about now is the leap in technology that we're dealing with. You know, they had all kinds of videos of people looking like me but who weren't me. And they had to keep whoever that woman was with her back to the camera enough so that they couldn't actually, you know, be found out. Now they can just go ahead, they can take me, and in fact they're experimenting. I've had people, you know, who are students and experts in this tell me that, once again, because they've got such a library of stuff about me, they're using it to practice on and seeing how much more sophisticated they can get. So I am worried, because, you know, having defamatory videos about you is no fun. I can tell you that. But having them in a way where you really can't make the distinction that Vivian was talking about, where you have no idea whether it's true or not, that is a totally different level of threat. So I think, you know, we're setting the stage in this panel, and we've got, you know, two people who really understand this deeply with our other panelists.

Well, thank you very much indeed. And I'm glad you went back to that extraordinary story post-2011, cuz I think mostly we don't know about it. I, like most people, when I first saw those images very early on, because I do speak Russian, laughed. I thought it was so ridiculous. And boy, was I wrong to laugh. It's extraordinary how this has spread and how much more virulent it is today. And I'd like to turn to you, Commissioner, and ask you, when you look at the problem: you've just had an election in Slovakia where, as we just heard earlier, there was AI used to manipulate the vote, seemingly successfully. Europe has a lot of elections coming down the track this year. The European Commission has taken a much more aggressive stance than Washington in trying to stand up to Big Tech and control, maybe not aggressively enough, but control them. I can see your face. You can tell us if you agree or not, but at least you've tried to challenge Big Tech in some aspects of their responsibilities. How concerned are you that this year's elections in Europe will be undermined by AI?

Yeah, thank you very much, and thank you for inviting me, because this is really an honor to be on such a panel. Well, how worried am I? I don't have any right to be worried. I have to act, because I am the European regulator. And I don't know whether we were aggressive, but what we did was maybe not aggressive but necessary, because we already have good data, good analysis showing that most of the elections in the EU member states are affected by at least Russian propaganda and Russian hidden manipulation these days. Yesterday, the Czech and Polish secret services disclosed the data and the facts about Russian propaganda and disinformation affecting several elections through domestic parties. Yeah. Putin cannot do it directly from his mouth to the ears of... it's me. He cannot do it.

Right. That's not Putin.

You never know. He cannot do it directly from his mouth to the ears of European people. But he simply needs allies in our member states. We had to take measures, and we are taking measures before the European elections, which you asked about, for two reasons. We do not want Mister Putin to start winning elections in the EU, because the purpose of his propaganda is clear. It is a message: stop the support of Ukraine. And he knows that we are all democracies, so he has to do it through our people. So the purpose is absolutely clear.
So we take measures: an agreement with the platforms to remove deep fakes in the campaign, so the AI should be very much limited, or at least labeled. We have measures involving civil society for enhanced fact-checking. We have a call on the independent and public service media to take care of the facts. Because what we speak about is the protection of evidence-based truth. And Madam Clinton, it's interesting how we politicians try to avoid the word truth, because we will be immediately accused of having our subjective truth doctrine. So when I speak about the truth, I speak about evidence-based truth. We speak about the facts. And we really believe that our set of measures might, and should, have a real impact on the campaigns: that they will be fair, that they will be transparent, that they will be free of hidden manipulation by AI.

And maybe a last comment on why I am dealing with that. I am commissioner for values. Can you imagine the shock at the beginning, when I got this portfolio of values from Ursula von der Leyen? What does she mean by that? And so it's the protection of rule of law, democracy, and fundamental rights. And I would add also the protection of evidence-based truth. Because the destiny of a society which stops valuing the truth is to live in lies. And this is what we don't want in Europe.

Absolutely. Well, that's particularly, you know, pertinent and potent for anyone emerging from East European countries. I'm just curious, before I turn to Maria: have you been the victim of misinformation yourself?

Oh, my God, I can already hear the attacks.

On my feed, I see the attacks on both these women. By the way, you know, our attackers just combine. So, yes. You've been under incredible...

I, yes, I have been under attack for many years. That's why I also canceled my Facebook account. And at the end of the year, I might get out of politics totally. So believe me, I will cancel everything. I am now on Twitter and Instagram. Yes, I am under permanent attack. I was one of the two women most attacked in Czechia, and the other one was Angela Merkel. As for AI, I have no complaint, because there was just one case of me having Lara Croft's body. I liked it. But maybe we should not make jokes about that, because we see a lot of really harmful things against girls and women.

Yeah, and that's why we also recently adopted the directive on violence against women, which contains a chapter on digital violence. It's, I think, the first ever democratic state legislation in which we are defining that. And also in the AI Act, we define these kinds of practices as something which has to be punished, because we also have to see crime and punishment in practice.
Absolutely. As someone who's been watching the Muslim world a lot, one of the things that horrifies me is how female activists are being silenced by the use of AI to create pornographic images that are so shameful that it makes it extremely hard for female activists to continue in that culture. It's absolutely horrific. It really is. Maria, have you been attacked?

Oh, my gosh. Well, first of all, it's nothing compared to what both these women have had. And, you know, I think for our American audience: Věra Jourová is not only handling her values portfolio, but she also has the portfolio of Margrethe Vestager, which means she is the most powerful woman regulating tech right now. Right. And that's part of the reason, I swear, seriously, the last time I was on this stage with Hillary, I was attacked by her attackers. And every time I'm on stage with Věra, we also get...

But in fact, we love each other.

We do. Bottom line, we're all gonna get attacked the minute we get off stage. So there we go.

And I think the hard part is you don't know what it's like until you are attacked. And that's part of the reason I would like some of the men from Silicon Valley to actually trade places with us for a day or so. Have I been attacked? Yes. It is a prelude. Bottom up: you say a lie a million times, it becomes a fact. For me, it was 98 messages per hour. And then, a year later, the same thing that was seeded online became cases that were filed by my government against me. Very slowly, you know, the 21 investigations became 11 criminal charges, became only two left after seven years, right? So we fought it.

But I think the real impact of this, and you've talked about it, but Russia is really the pioneer. And the EU's elections, the major democracies around the world, are having elections this year. Where the EU goes, where America goes, it's really scary for the rest of us in the global south, cuz it's not even acknowledged that you're being manipulated. If you're a woman, gender disinformation is using free speech, i.e. information warfare, to pound you, to silence you: if you are in a position of power, if you're a journalist, if you're a human rights activist, if you're a student who stands up for, you know... this whole thing of woke, like, we kind of jump into it, but there are information operations that seed a lot of this. So this is the world of lies. Let's not even call it anything else: it's a world of lies. It's a world of personalization. And I see so many faces here, cuz you're gonna hear from David Agranovich, and I see Katie Harbath, who was also in the Philippines. So please ask the questions. But more than anything, you can be attacked.

And it's not just about being attacked. It's the fact that we have lost agency.
We live in different realities, right? Personalization, when you're talking about
buying sneakers is, you know, okay, fine: you're gonna get recommended sneakers cuz you looked for sneakers. That was a long time ago.

Now personalization means that I will give you your reality. But even though
we're in the same shared space, we have a hundred plus realities. That's
called an insane asylum. That is the world we live in today.

Absolutely. Well, it's no accident there are four women sitting on this panel right now, because it really is a strong gender issue. And thank you, Maria, for pointing out that, notwithstanding the inward-looking nature of a lot of American and European politics today, it's not just a Western issue. In many ways, it's actually harder to tackle in the emerging world right now, which is, you know, very alarming. But we're gonna hear a lot later on about what can be done to counter this. Would you like to share any thoughts, Maria? Do you have thoughts about what you'd like to see to fight back?

I mean, for Americans: get rid of Section 230. Because the biggest problem we have is that there is impunity, right? Stop the impunity. Tech companies will say they will self-regulate. Self-regulation comes from news organizations, when we were in charge of gatekeeping the public sphere. But we were not only just self-regulating; there were legal boundaries. If we lie, you file a suit. Right now, there's absolute impunity, and America hasn't passed anything. I joke that the EU won the race of the turtles in passing legislation that will help us. It's too slow for the fast, lightning pace of tech, right? The people who pay the price are us. Us, this young generation. I was just with Vivek Murthy, and, you know, the Surgeon General of the United States didn't file his report until May last year. Hillary was probably ground zero for all of the experimentation. What kind of different world would we live in if she had become president? I mean, she won't say that, but I will. Right? Like, I think...

Many people in the room would think that. Secretary Clinton, would you agree that the first step is to abolish Section 230?

It certainly is among the first steps. You know, I think it's very difficult to be as upset with the tech companies as we are, and I think rightly so, since they were granted this impunity. And they were granted the impunity for a very good reason back in the late 90s, which is: we didn't know what was gonna happen. We had no idea. Were they a platform, kind of like a utility, which sent content through it, and therefore, you know, they didn't have the kind of liability, and you would go underneath to see where the content came from? Were they content creators? Did they have a duty either to warn or to prevent? I mean, nobody knew anything, because nobody had a real sense of what was happening. Well, now we do. And shame on us that we are still sitting around talking about it.

Section 230 has to go. We need a different system under which tech companies, and we're mostly talking obviously about the social media platforms, operate. And I, for one, think they will continue to make an enormous amount of money if they change their algorithms to prevent the kind of harm that is caused by sending people to the lowest common denominator every time they log on. You've got to stop this reward for this kind of negative, virulent content, which affects us across the board. But I will say it is particularly focused on women. The empowerment of misogyny online has really caused so much fear and led to some violence against women who are willing to take a stand, no matter who they are. Are they in entertainment? Are they academics? Are they in politics or journalism? Wherever they are. And the kind of ganging-up effect that comes from online. It could only be, you know, a very small handful of people in Saint Petersburg or Moldova or wherever they are right now who are lighting the fire. But because of the algorithms, everybody gets burned. And we have got to figure out how to remove the impunity, come up with the right form of liability, and do what we can to try to change the algorithms. And the final thing I would say is we also need to pass some laws that understand that this is the new assault on free speech. You know, in our country, people yell free speech. They have no idea what they're talking about half the time. Yes. And they yell it to stop an argument, to stop a debate, to prevent legislation from passing. We need a much clearer idea of what it is we are asking governments to do, businesses to do, in the name of do no harm. And free speech has always had limitations, always been subject to legislative action and judicial oversight. And we need to get back into that arena.

Right. Commissioner, I can see you, frankly, scribbling notes. Tell us: you are officially the leading regulatory turtle. What would you do?

Well, I remember last year when I was in Davos, I said similar things as you, Madam Clinton, about maybe the United States also having to move towards less impunity or no impunity online. You cannot imagine what I received from Republicans. I was afraid that I would be somehow wanted here as somebody who is committing a horrible crime. But maybe for the EU it is easier to legislate the digital space, because look at the situation: while the United States has to make a big jump, we were kind of ready for that. Because count with me: illegal content, hate speech, child pornography, terrorism, violent extremism, racism, xenophobia, antisemitism. We have had all these things in our criminal laws for decades. This is nothing new. So when we started to think about how to legislate the digital space, we in fact said: what is illegal offline has to be handled as illegal online. So we didn't create any kind of new crime. It was just pushing the existing law into the digital space. So that's why, for us, this era of adaptation was maybe easier than in the US, where you really have to make a bigger jump.

Let me say two more things. Impunity is wrong. Crime and punishment missing in the digital sphere is another crime, I have to say. And we have to also adapt as a society. I would like to still be alive when I see strong rejection from society: that this is not acceptable, we don't like it, and if on that system we are confronted with hate speech and dirty content, we will simply move somewhere else. So also for the digital companies, it will be a strong signal that they should not let their business be damaged, because they need users.

Yes, so this societal reaction is still missing. I think that it will take some more years. A last comment on violence against women: we see women disappearing from public space. And here we speak about politicians and journalists mainly. And we had a conference, Maria was in Brussels, last month. And one shocking thing came out. Here I speak about the politicians: when the political parties want to win elections, they are attracting women to come, because they are good products to sell. Yeah, sorry, we speak about women. Yeah, yeah, yeah, in campaigns. But then, when the women take the temptation and become politicians, the same political parties are not honest enough and courageous enough to defend them. So I see cases of women who are horribly attacked with horrible words, like the Slovak president. Nobody is defending her. So should we remain alone with that? I think that there should be also some healthier reaction from the political parties and from the newsrooms as well.

Well, thank you. Well, sadly, very sadly, we are out of time. You set the scene fantastically. I take away three key points. One is that if women were running the world, I think there would be quite a different tone and sense of urgency to this debate. Secondly, these issues of misinformation are not entirely new. I mean, they go back a decade, but they have dramatically accelerated in recent years, and AI is threatening to make them worse. And we have no time to lose, because of the impending elections. And thirdly, we cannot duck the question of what is happening with the tech companies and their responsibility if we want to move forward to some kind of, if not solution, then containment. We'll be hearing from tech companies later on today. We'll be hearing from a number of other voices about this vital debate. But in the meantime, can you all please show your thanks to them for a great conversation?

Please welcome back to the stage Secretary Hillary Rodham Clinton.

Hi, how are you?

Well, if you're not depressed, we'll get you there. I could not be happier to have these extraordinary panelists follow up on the setting of the stage, because now we wanna get a little bit deeper and understand the implications for the upcoming US elections. And we have four amazing panelists. Jocelyn Benson is the secretary of state of Michigan, and she's been in the eye of the storm since well before the 2020 election, by far. But, you know, since then, certainly one of the real leaders trying to understand what was happening. Michael Chertoff, the former secretary of homeland security, co-founder and executive chairman of the Chertoff Group. And, you know, Michael really has just a depth of experience about dealing with, originally it was online radicalization and extremism, and now, of course, based on his knowledge of that set of threats, he understands, you know, what we've got to face, what's gonna happen in the elections.

Dara Lindenbaum is a commissioner of the Federal Election Commission of the United States. And as such, you know, she is part of the group that is trying to, you know, make sense of where money is being spent and what's being done with it and the impact that it is having. And Anna Makanju is the vice president of global affairs at OpenAI, and we really are thrilled that she's here with us, because clearly OpenAI, along with the other companies, you know, is forging new ground, and a lot of it is very exciting. And frankly, Anna, a lot of it's very concerning. So part of what we want to do is help sort that out, particularly as it does possibly affect elections. So, Michael, let me start with you, because, as I said, you really were on the front lines when you were at the Department of Homeland Security, leading efforts to understand and prevent the use of the internet at that point to provide outlets for extremism and the radicalization of people.

You know, and now I think there's legitimate concern about hostile foreign state actors. Not just Russia; there are others who are getting into the game. Why not? It looks like it works, so, you know, join the crowd. But we're now worried that they will use artificial intelligence to interfere in our elections this year. Can you explain, for not just our audience here but the people who are watching the livestream, you know, both the downsides, how AI can be used by our adversaries, but also what can we do to protect ourselves?

Thank you. And thank you again for leading this, Secretary. So let me say, I mean, I think in this day and age we have to regard the internet and information as a domain of conflict. Actually, if you go back historically, even 100 years, it's always been true that our adversaries have attempted to use propaganda and false information to manipulate us. But the tools they had were relatively primitive. What artificial intelligence has done is equip people with tools that can be much more effective with respect to the information domain. We've talked a little bit about deep fakes and the ability to have simulated video and audio that looks real. And unlike Photoshop or some of the things some of us remember from years ago, this has gotten to the point that it's very difficult, if not impossible, for an ordinary human being to tell the difference.

But I would actually argue that artificial intelligence has capabilities and risks that go beyond that. What artificial intelligence allows an information warrior to do is to have very targeted misinformation and, at the same time, and it's not a contradiction, to do that at scale, meaning you do it to hundreds of thousands, maybe even millions of people. What do I mean by that? In the old days, again, 10, 20 years ago, if you sent out a message that was incendiary, you affected, and maybe positively induced belief in, some people, but a lot of other people would look at it and go, oh, this is terrible. And it would repel them. So that was an inhibiting factor in terms of how extreme your public statements were. But now you can send a statement tailored to each individual viewer or listener that appeals only to them, and nobody else is gonna see it. Moreover, you may send it under the identity of someone who is known and trusted by the recipient, even though that is also false. So you have the ability to really send a curated message that will not influence others in a negative way. And the reason I say it's at scale: you can do it millions of times, because that's what artificial intelligence does.

So I think that has created a much more effective weapon for information warfare. Now, in the context of the election in particular, what are we, I think, worried about? Well, I think obviously one experience we had, we saw this in 2016 with Russians assisting the Trump campaign, is there can be an effort to skew the votes to a particular candidate or against a candidate, and we've seen that now. We saw that with Macron in France in 2017. We've seen it in the EU and in other parts of the world. But I would actually argue that this year we're facing something that, in my view, is even more dangerous. And that is it will be an effort to discredit the entire system of elections and democracy. You know, we had a defeated candidate, who I won't mention by name, who talked about a rigged election. Now, imagine that, for the people who are an audience for that, they will start to see videos or audios that look like persuasive examples of rigged elections.

Now it's like pouring gasoline on a fire, and we could have another January 6th. And I understand that the reason our adversaries like this is because, more than anything else, they want to undermine our unity of effort and our democracy. And in a world in which we can't trust anything and we can't believe in truth, we can't have a democracy. And that, I think, leads you to a third consequence, which will be very dangerous.

We're talking about how do you distinguish, and teach people to distinguish, deep fakes from real things. And the idea being we don't wanna have them misled by the deep fakes. But I worry about the reverse. In a world in which people have been told about deep fakes, do they say everything's a deep fake, and therefore even real evidence of bad behavior has to be dismissed? And then that really gives a license to autocrats and corrupt government leaders to do whatever they want. So how do we help counteract that? Well, I mean, there are some technological tools. For example, there is now an effort to do watermarking of video and audio, where a genuine video or audio, when it's created, has an encrypted mark such that anybody who looks at it can validate that it is real and it's not fake.
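
To make the "encrypted mark" idea concrete, here is a minimal sketch, assuming Python and the widely used cryptography library: a creator signs a hash of the media file at creation time, and anyone holding the public key can later validate that the bytes are unmodified. The keys, file contents, and function names are illustrative only, not any particular watermarking product.

```python
# Minimal sketch of a cryptographic "mark" on genuine media: the creator
# signs the SHA-256 digest of the file at creation time; any later edit to
# the bytes breaks verification. Real provenance schemes embed far richer,
# standardized metadata inside the file itself.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# A newsroom or camera maker would generate a long-lived keypair once.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_media(media_bytes: bytes) -> bytes:
    """Sign the media's digest at creation time."""
    return private_key.sign(hashlib.sha256(media_bytes).digest())

def is_authentic(media_bytes: bytes, signature: bytes) -> bool:
    """Verify the mark; tampered media or a forged mark fails."""
    try:
        public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

video = b"...original video bytes..."          # placeholder content
mark = sign_media(video)
assert is_authentic(video, mark)               # untouched file validates
assert not is_authentic(video + b"x", mark)    # any tampering fails
```

In practice the signature and the signer's identity travel inside the file's metadata and chain up to a certificate authority, which is roughly what the C2PA standard discussed later in the program formalizes.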

More than that, we've got to teach people about critical thinking and evaluation so they can cross-check. So that when you get a story that appears to stand alone, look to see what are the other stories. Is anybody else picking it up? And we need to actually establish trusted voices that are deliberately very careful and very scientific about the way they validate and test things. And finally, I think we've got to teach, even in the schools, and this is gonna start with kids, critical thinking and values: what it is that we care about, and why truth matters, why honor matters, why ethics matters. And then to have them bring that into the way they read and look at things that occur online. This is not going to be an easy task, but I do think we need to engage everybody in this process, not just people who are professionals, and make it part of the mandate for civil society over the next year or two.

Thank you so much, Michael. That was incredibly helpful in laying the, you know, the groundwork for what we need to be thinking about. So, Dara, what is the Federal Election Commission doing to try to set up some of those guardrails on AI-fueled disinformation ahead of the 2024 elections?

Well, thank you for having me. First of all, it's an honor to be a part of this really important discussion. So, to your question, the short answer is that the FEC is fairly limited in what it can do in this space. But there is hope on the horizon, and there are different ways that things are developing. So, just to lay the baseline: despite the name, the Federal Election Commission really only regulates the campaign finance laws in federal elections. So the money in, money out, and transparency there. But last year we received a petition for rulemaking asking us to essentially clarify that our fraudulent misrepresentation regulations include artificial intelligence, and really deep fakes. And we are in the petition process right now to determine if we should amend our regulations, if we can amend our regulations, and whether there is a role for the FEC, in these campaign finance regulations, in this space. But our language is pretty clear and very narrow. So even if we can regulate here, it's really only candidate-on-candidate bad action. So if one candidate does something to another candidate, that is all that we could possibly cover because of our statutes. And that's unless Congress expands that.

But all is not lost. And there are some pretty great things that have come out of this. And one is what happened during our petition process. We received thousands of comments from the public and from many other institutional actors, including a lot of the, you know, smaller tech companies and organizations that don't often have a seat at the table. But here it was really an open forum for them to bring their ideas to light. These comments were insightful. They were creative. And it is my hope that Congress and states and others looking at this will read all of these comments as they try to come up with possible creative solutions here. And in addition, Congress could expand our limited jurisdiction. If you asked me three or four years ago if there was any chance Congress would regulate in the campaign space and really come to a bipartisan agreement, I would have laughed. But it's pretty incredible to watch the widespread fear over what can happen here. And we had an oversight hearing recently where members on both sides of the aisle were expressing real concern. And while I don't think anything is gonna happen ahead of November, I see changes coming.

And there's bipartisan discussion. Senator Klobuchar is leading this, Senator Warner. And they're thinking about ways that they can do something. These are really in the deep fake space. It's not the misinformation, disinformation that's underneath it all. But this discussion of AI, and how AI is so at the forefront of everything that we're discussing in this country, I think it has brought more to light this misinformation, disinformation, and the ways that the information gets disseminated, so that it is bringing that discussion out. So things could change. I'm hopeful.

Well, I really appreciate your talking about that, Dara, because a lot of people say, well, who oversees elections? Who tries to make sure that our elections, you know, don't go off the rails and we don't have a lot of these problems? And as you just heard, it's not the Federal Election Commission. Their mandate is narrow, and they try to, you know, make sure people who are contributing to elections have the right to do so and candidates are spending appropriately. So much of the work of regulating elections is done at the states in our country. And we're so fortunate to have Jocelyn here, because, as I said in introducing her, she really has been at the forefront of trying to figure out how to protect our elections, to make sure they have integrity. And Michigan has recently tried to regulate artificial intelligence. And I want you to tell us about that legislation and any other actions that you are taking on behalf of your state, and that you know other states are taking.

But maybe just start, Jocelyn, with a quick introduction of what you've been facing. You were elected in, what, 2018? And, you know, if you remember, pictures of armed men storming the capitol because they didn't like what the governor was doing about Covid. And Michigan was at the real center of all of the, you know, crazy theories that were put forth in 2020 about the election. Give us just a quick overview, and then tell us about what your regulation intends to do and what else needs to be done.

Thank you. And thank you, Secretary Clinton, for inviting me to be part of this
really important conversation. To me, we cannot protect the security of our
elections if we don't take seriously the threat that artificial intelligence poses
to our ability as election officials to simply ensure every vote is counted and
every voice is heard and that citizens have confidence in their democracy and
in their voice and in their votes. And that's our goal in Michigan and in several
other states all around the country.

We are coming off of being in the spotlight in 2020, rising to that occasion, but also seeing very clearly, and living it very clearly, when people with guns showed up outside my home in the dark of night in December, and I'm inside with my four-year-old son, trying to keep us safe. And that's real. And they showed up there just like they showed up at the Capitol on January 6, because they've been lied to and fed misinformation. And now we're facing an election cycle where those lies will be turbocharged through AI. And we have to empower citizens to stand with us, to not be fooled, and to push back on that misinformation and those lies. And therein lies both our opportunity but also the real challenge. The adversaries of democracy are focused on sowing seeds of doubt, creating confusion and chaos and fear in everything they do, and they now have this new emerging technology that is day by day getting stronger and more, you could perhaps say, effective at accomplishing those goals of creating chaos, confusion, and fear in our democracy, in our voters' minds.

How do we respond to that at the state level and throughout our country as citizens? By giving each other certainty and confidence that our democracy will stand, just as it prevailed in 2020 and every time before and since. But also that we can be equipped, every single one of us, to have clarity as to how to respond when we get this misinformation. So in Michigan, first, we set up the guardrails, and several other states have done this too, and we do hope the federal government joins us, in banning the deceptive use, the intentionally deceptive use, of artificial intelligence to confuse people about candidates, their positions, or how to vote or where to vote or anything regarding elections. So we've drawn a line in the sand. It's a crime to intentionally disseminate, through the use of AI, deceptive information about our elections.

Secondly, we've required disclaimers and disclosure of any type of information generated by artificial intelligence that's focused on elections. So, for example, one of the things we're worried about is, and we know because of AI it could be targeted to a citizen on their phone, getting a text saying: here's the address of your polling place on Election Day, don't go there because there's been a shooting, and stay tuned for more information. That's gonna invoke fear. Again, the goal is fear, right? It's gonna invoke fear in a citizen. With the disclaimer and disclosure requirement in place, it has to be disclosed that this has been generated by artificial intelligence. It's still not sufficient, but it is a key piece of enabling us to push back. But the other side of that is we need to equip that citizen, when they receive that text, to be fully aware, as a critical consumer of information, of what to do, where to go, how to validate it, where the trusted voices are. So, in addition to passing these laws, we are setting up voter confidence councils, building out these trusted voices, so that faith leaders, business leaders, labor leaders, community leaders, sports leaders, education leaders, even mayors and local election officials, can be poised to be aware and push back with trusted information. And so it's layers upon layers of both legal protections and partnerships to equip our citizens with the tools they need to be critical consumers of information.

And then, in everything we do between now and every election, but certainly leading up to November, helping to communicate to every room we're in that it's on all of us to protect each other from the threat of AI in regards to our elections, and in many other spaces as well. And while we as officials will be working to do that, we're also trying to communicate to citizens: this is a moment that's gonna define our country for years to come. And we all have a responsibility in this moment to making sure we're not fooled, our neighbors aren't fooled, our colleagues and friends aren't fooled, and equipping all of us with the tools we need to push back and speak the truth, value that honor and integrity, and help define our country moving forward based and rooted in those values.

Well, I am a huge fan of what you and your attorney general and your governor have been doing. And I think it would be great if you could get some help to model this. And I'm hoping maybe some tech company or some foundation will talk to you afterwards, because we need to show this can work.

I saw Michael nodding his head. I mean, we've gotta get, you know, if this is a fight against disinformation, we have to try to put up guardrails, but we also have to flood the zone with the right information to counter the negativity that is out there. So I hope you can implement that, and we can then all learn from it. Because it's not gonna be a problem that goes away after this election.

So, Anna, you've been sitting here, you've sat through the first panel, now you've heard our other panelists, and you're, you know, truly at the center of this. Because, you know, at OpenAI, you all are moving faster than anybody can even imagine, sometimes, I think, probably even yourselves, about what it is you're creating and the impact that it will have. And this is obviously the ground zero year. This is the year of the biggest elections around the world since the rise of AI technologies like ChatGPT.

And so can I ask you: do you agree with what you've heard from the panelists about the dangers? But then tell us what you're doing at OpenAI to try to help safeguard elections. You know, give us your assessment. Are we overstating it? Are we understating it? And what can be done, and how can you help us do it?

So I think what's been really interesting to me, listening to your first panel and to my co-panelists here, is that so many of these ideas and concerns we are already integrating into the technology. So, if I could just say, the one piece of good news is that, unlike previous elections, none of us, in terms of the tech companies, election officials, even the public and the press, are coming into this one unprepared. We're not coming into this unprepared.

You know, this is especially true for me, because I was actually working at the White House on the Russia portfolio in 2016. So this has been top of mind for me from day one in the job. But OpenAI, you know, a relatively young company, this is something that's been top of mind for us for years. In fact, GPT-2, which was several years ago and, you know, quite embarrassing compared to what exists now, at the time it was state of the art. It could produce paragraphs of text like a human could write. And even then we thought, oh, well, the possibility for this to be used to interfere with democracies and electoral processes is very significant. And so we made a decision then not to open-source it. And it was quite controversial in the research community, but it was because we had this in mind.

So, you know, ahead of 2016 we were not having panels like this. And so I think in general we are much more prepared as a society, and we are working together. You know, OpenAI is working with the National Association of Secretaries of State and with social media companies, because one key thing to remember is that there is a real distinction. We are not dealing with the same kinds of issues at AI companies; what we do is generate AI content, rather than distribute it, but we need to be working across that chain. And, in terms of, you know, of course, as many have mentioned here, and as I hear in almost every interaction with policymakers, deep fakes are a very serious concern. And so, for us, we have DALL-E, which is an image generator. And, you know, we do not allow generating images of real people, period, but in particular politicians.

And now we are implementing something called C2PA, which is a digital signature. And the great thing about C2PA is that it's not just AI companies. You know, this is the New York Times and Nikon and, you know, the BBC. So it's going to be an ecosystem where there's actually a tool across the ecosystem that's going to work to help journalists and social media companies identify if a piece of content is generated by AI. Obviously, this is not, you know, a complete solution, but this was not the case a year ago. So already we are much more advanced in our ability, as, you know, the entire ecosystem, to deal with these issues.

But we also have, you know, threat investigators. We recently just took down a bunch of state actors who were using our tools. And so these two pieces, cooperation across all of the players, and all of the state-of-the-art interventions that we are building, I think mean that, right now, the kind of thing that you describe, Secretary Chertoff, is not possible with OpenAI tools. You cannot connect them to a chatbot to sort of skew information and target it at voters. But, you know, we're constantly evaluating what other kinds of threats this technology creates that are novel. And I would just kind of wrap up with: I do, of course, have optimism; otherwise, I wouldn't be working at OpenAI. One of the things that these tools have the potential to do is create access to education for new segments of society. And so, you know, there's a potential for these tools to actually help create a citizenry that is more educated and more aware, which I think is a really key aspect of a healthy democracy. And they can be used, you know, by secretaries of state that are incredibly busy in back offices. And it is a bit of a race between the positive applications of these technologies and the negative ones. So, you know, this is why it's so fantastic that, for example, the EO by President Biden really works to strike that balance.

Well, we have only a few minutes left, but I just wanna ask each of the panelists: you know, what steps can governments, obviously national, state, and local in our country, the private sector, companies, particularly the tech companies, the AI companies, the platforms, nonprofits, anyone that you think of, what could and should they do to try to, number one, ensure the integrity of this upcoming election, and then, for the longer term, what are the changes we need? Anna, can you start with that?

I think it goes back to what I already mentioned, which is that really close collaboration with AI companies, social media companies, election officials, civil society, really working together to address this problem and sharing best practices and sharing knowledge. Because this is a whole-of-society problem, and no single actor is going to be able to be fully effective in solving it.

Education component and pushing to find trusted sources is key. The


technology is gonna change. There's gonna be a new technology.

In the future. No matter what the government does.

Or what the tech companies do, we need to strengthen the trust that people
can build and entrusted, you know, forms of news. And I think we're seeing
some of that starting to change. I go.

I would say, in addition to those suggestions, information sharing. When there is an indication that something is coming that's part of a wave of disinformation, sharing information among all the stakeholders, including federal and state governments and the public, is very important.

Now, I know that there's some litigation where some states have tried to make it illegal for the government to share information about disinformation with the platforms, because they argue that that's censorship. I personally think that's nonsense. I think what you're doing there is giving information that's helpful and not doing anything that's harmful.

Yeah, I agree. And particularly for philanthropy and foundations: really invest in entities and partnerships that are focusing on this education and sharing of information, and on building more collaborative partnerships and teamwork around this. All of that, I think, has to be the foundation for every entity making its first priority protecting citizens from the ways in which AI can be negatively used to harm their voice and their votes in a democracy. And we have to recognize that the adversaries of democracy have figured out how to divide us and demobilize us and deter us from believing in our voice through the use of AI. So our response needs to be similarly collaborative, national in scope, and focused on empowering citizens and partners across every arena and sector, tech and beyond, to be a part of the pushback and the protection of our citizenry from this threat to our democracy.

Well, I can't thank the four of you enough. And maybe out of this panel will come that kind of cooperation. Let's try it out. Let's see. Let's get OpenAI, Facebook, and others together with people like Jocelyn and Michael, who have a lot of depth, and with what Darren knows from seeing where the money flows. And let's see if there can't be some cooperative effort between now and this election. If we don't try, we know what's gonna happen. And Michael, you made a great point: we need more transparency and openness, and that should mean declassifying information as quickly as possible so it gets out in public. Frankly, governments need to get it out and not ask for permission, because it can influence the conversation going forward. But this idea of collaboration: it's always better in a democracy to have collaboration, to bring people together. So let's see what we can do to follow up. Thank you all very much.

There will now be a short break in the program. We will resume at 2:50 PM. Thank you.

Hi, everybody. Boy, those were two fabulous panels. I hope you agree. And of course, there were many mentions of those darn tech companies. So now we have the tech companies up here, and boy, are we gonna grill them. Sorry.

You guys are terrible.

I know. No, we will, actually. But first of all, let me introduce our panelists. We have, at the end, David Agranovich, who is the director of global threat disruption at Meta; Yasmin Green, who is the CEO of Jigsaw, a unit of Google that addresses threats to open societies; and Clint Watts, who leads Microsoft's Threat Analysis Center, which is part of Customer Security and Trust.

So where I want to begin, actually, with each of you: all three of you, in slightly different ways, are focused on looking at the threats and the risks that you're seeing across your platforms or across society, maybe even more so in your case, Yasmin. So I want to begin by having you share what you're seeing, particularly in relation to the use of AI when it comes to information deception or other forms of AI deception. And Yasmin, I'm gonna begin with you. Jigsaw's a little bit of a different animal here, because you really are looking at societal changes and what kind of interventions you can make, not even necessarily via your platforms, to effect change. So give us a little bit of a sense of what you're seeing.
Okay. Hello, everyone. I think the panels before did such a good job of surveying the landscape, including the threats, so I wanted to get a bit specific and build on what was said. We talked about trust in the last panel, and one of our observations about the trust landscape is not that we are in a post-trust era, because as humans we have trust heuristics; we have to make decisions, we have to evaluate things. It's not that trust has evaporated, it's that it's migrated. Trust is much less institutional and much more social. And I think that's really important as we think about the risks posed by generative AI.

So we did an ethnographic study with Gen Z to figure out how young people go about evaluating information online and what trust heuristics they have.

So I wanted to just survey this room, and I think we have a good generational mix here. Just by show of hands: how many people here read the comments underneath news articles? I'd say that's maybe half, two-thirds. I gotta tell you, I don't read the comments. I thought our collective coping mechanism for the internet is that we don't read the comments. Okay, well, I'll tell you who reads the comments: Gen Z. [crosstalk] And the interesting thing is not that they read them so much as when and why. When do they read the comments? They often go headline, comments, and then the article. Why would they be doing it in that order? Because, and this is according to them in this research that we did, they want to know if the article is fake news. So you see the inversion there. I would see the article as the journalists being the authoritative curators of information, interviewing experts or authorities. And Gen Z, and I think increasingly all of us, are going to the social spaces to look for signal. We kind of throw around the term information literacy, but what we saw instead is more like information sensibility: they're looking for social signals about how to situate the information, the claims, and the relevance to them. So, you know, we famously had the term alternative facts; this is alternative fact-checking, and we should be really concerned. And it's relevant to generative AI because of one of the things that we maybe emphasize less than we should, because it's a threat that's coming around the corner.

In addition to synthetic content, we have synthetic accounts. We have accounts that are going to be, we talked about this idea, human-presenting chatbots. And what we're seeing, one of the products that we offer at Jigsaw is the most popular free tool for moderating comment spaces, so we have billions of comments every day that go through our thousand partners, and we hear about synthetic accounts that are there and posting. They're not sending you crypto, they're not spreading disinformation. They are active. What are they doing? They're building a history of human-like behavior. Because in the future, yes, it's gonna be really important for us to do detection wherever we can, to evaluate whether something's a deep fake. But when there's a deep fake, where do you think people are gonna go to check? They're gonna go to other people in the social spaces for signal. So we need to invest in humans and also invest in ensuring that the human-presenting chatbots do not have an equal share of influence there.
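
The moderation tool Green is referring to is presumably Jigsaw's Perspective API, which scores comments for attributes like toxicity. A minimal sketch of calling it, assuming you have an API key; a real deployment would also handle quotas, languages, and batching:

```python
# Score a comment's toxicity with Jigsaw's Perspective API (sketch).
import requests

API_KEY = "YOUR_API_KEY"  # assumed; issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1"
       f"/comments:analyze?key={API_KEY}")

def toxicity(comment: str) -> float:
    body = {
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

print(toxicity("You are an idiot."))  # score in [0, 1]; higher = more toxic
```

Note that a score like this says nothing about whether the account behind the comment is human, which is exactly the gap the synthetic-accounts point is about.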

Fascinating. So synthetic identities, not just synthetic content. Fascinating. Clint, I've known you for a number of years. You've been now with Microsoft for, what, two...

Almost.

Almost two years. But you have been doing this kind of deep digital forensics for quite a long time. So you've seen the trajectory, how things have evolved from before 2016 until now. So give us a little sense of what change you have seen, particularly since generative AI took off in the wake of the launch of ChatGPT, and what risks you're seeing today.

Yeah, so it's interesting in terms of timing. It was ten years and two months ago that we encountered our first Russian account impersonating an American that would later go after the election. We were working from our house, and we used a tool called Microsoft Excel, which is incredible if you've ever checked it out. Now we use Microsoft, so that's a major change in ten years. And what's interesting is that in 2014, '15, '16, Russia was testing it on its own population first, then Ukraine, Syria, Libya; it was battlefields. And then it took it on the road to all the European elections and the US election. Watching what has transpired since then, there's often a little bit of a misunderstanding about how much things have changed in ten years in terms of social media. Speaking of Gen Z: would Gen Z read more than 200 words? I bet they would watch 200 videos. That's one of the biggest changes in ten years with the technology, and it's not just about Gen Z; it's about my generation and everybody older. Video is king today. And if you're trying to influence by writing a hot 9,000-word blog, you're running uphill with a lot of weight on your back. So, you know, our monitoring lists in 2016 were Twitter or Facebook accounts linking to Blogspot. In 2020, it was Twitter or Facebook and a few other platforms, but mostly linking to YouTube. And today, if you go to any monitoring list for any threat actor, it's gonna be all video.
So my team tracks Russia, Iran, China, worldwide. We've got 30 on the team. We do 15 languages amongst the analysts, and we're mostly based here in New York. Nine months ago, we did a dedicated focus on what the AI usage is by these threat actors, so we have some results of our research so far. And what I would say is that in 2024 there will be fakes; some will be deep, most will be shallow, and the simplest manipulations will travel the furthest on the internet. In just the last few months, the most effective technique used by Russian actors has been posting a picture and putting a real news organization's logo on that picture, and I'm sure David will be able to tell you more about this, distributing it across the internet, where it gets millions of shares and views. There have been several deep fake videos in and around Russia, Ukraine, and some elections, and they haven't gone very far, and they're not great yet. This will all change.

Remember, this is March, so things are moving very quickly. So what I would note, just looking at a few things, is that there are five distinct things to look at. One is the setting: is it public versus private? In public settings, and I would love David's take on this, when you see a deep fake video go out, crowds are pretty good collectively at saying, nah, we've seen that video before, I've seen that background, he didn't say this, she didn't say this. We've seen Putin and Zelensky deep fakes, and the crowd will surface the real video and it kind of dies.

The place to worry is private settings. When people are isolated, they tend to believe things they wouldn't normally believe. Anybody remember Covid, when we were all at our houses? It was very easy to distribute all sorts of information. It was hard to know what was true or false or what to believe, and people had totally different perceptions of the pandemic.

The second part is that in terms of the AI, the medium matters tremendously. Video is the hardest to make; text is the easiest. But text is hard to get people to pay attention to; video, people like to watch. Audio is the one we should be worried about. AI audio is easier to create because your data set is smaller, so you can make it of a lot more people, and there are no contextual clues for the audience to really evaluate. When you watch a deep fake video, you go, something's off: I know how that person walks, I've seen how they talk, that's not quite how it is. Audio, you'll give a discount. You'll say, yeah, on the phone maybe they do sound like that, or it's kind of garbled, but maybe that is them. We've seen that in the Slovak elections. We've seen that with the robocalls around President Biden. In Indonesia, we've seen these sorts of examples. There was a deep fake video that used Tom Cruise's AI voice; he's probably the most faked person, both video and audio, around the world. That's tougher to do. And that comes to the other thing to look for: there's an intense focus on fully synthetic AI content, but the most effective stuff is real with a little bit of fake blended in, changing it just a little bit. That's hard to fact-check. It's tough to chase after. So when you're looking at it: private settings, and audio with a mix of real and fake. That's a powerful tool that can be used.

A couple of other things to think about are the context and the timing. Many of you probably saw information that was totally incorrect about the Baltimore bridge tragedy this week. Right? People immediately rush to things, and when you're afraid or there's something you've never seen before, you tend to believe things that you wouldn't normally believe. So imagine a super contentious event, or some sort of accident or tragedy: AI employed in that way can be a much more powerful tool. To do that, you have to have staffing, you have to have people, you have to know the technology, you have to have compute, and you have to have capacity. That's not a guy in his basement on the other side of the river, folks. That is a well-organized organization with technology that has the infrastructure to do that and is ready to run on something instantly, i.e., the Russian disinformation system, which doesn't hire just 10 or 20 people. We're talking about thousands of people working this nonstop, around the clock. And as we know, in all the governments around the world, there are just thousands of people working to counter disinformation day in and day out, right? We stay up till two in the morning watching this. We're just not set up the same way, and that gives them a strategic advantage. Ten years ago, we were tracking two activity sets of Russia that ultimately went after the 2016 election. Today, my team tracks 70 activity sets tied to Russia. That just tells you, in terms of the scale worldwide and the way things are going, that's something to look for.

The last thing to think about is knowledge of the target. And the Secretary brought up a great point: if people know the target well, if they've seen them over and over again, they're better at deciding whether something is fake or not. But if they don't know the target or the context well, they are not as good at it. So there's always the scenario that there'll be a deep fake of the presidential candidate, and it will change the world and make everyone's heads explode. Probably not. But if it's a person working at an election spot somewhere out in a state and a deep fake is made of them, or maybe they're not even a real person...

It's these contextual situations that we have to be prepared for in terms of response. So our team is set up for that. We work with Google and Meta. And I would just tell you, from my experience being on the outside of tech and now being inside: ten years ago, when I notified tech companies about the Russians going after the election, they told me I was an [expletive] and that no one would believe that. Now I work at a tech company and we do exchanges all the time. So I would just like to point out, I feel like we've got great relationships. Yasmin and David, we've worked together for years on different projects. So I think that's something else that's quite a bit different today.

That's great. Thanks, Clint. And David, I want you to pick up where Clint is leaving off. Obviously, add any additional context you can to what he said about what you're seeing out there. But then I also want you to address something Clint mentioned, which is that it's one thing when it's a big, splashy deep fake that's all over public forums; those can easily be debunked, and I agree with you, the big spectacular deep fake of one of the major presidential candidates is unlikely to have a huge impact. But the stuff that you can't see because it's on messaging platforms, that's what we're worried about. So talk about what you're seeing there.

Absolutely. And building a bit on what Clint mentioned around what we're seeing from threat actors around the world: our teams have now taken down a little over 250 different influence operations around the world, including those from Russia, China, and Iran, but also a number of domestic campaigns from countries all over the world.

Maybe the three key things that we're seeing, in addition to the trends Clint mentioned. One, these are increasingly cross-platform, cross-internet operations. The days of a network of fake accounts on Facebook and a network of fake accounts on Twitter, somewhat closed ecosystems, are gone. I think the largest number we've ever seen is 300 different platforms implicated in a single operation from Russia, including local forum websites (things like Nextdoor, but for your neighborhood), as well as more than 5,000 web domains used by a single Russian operation called Doppelganger that we reported on last quarter. What that means is that the responsibility for countering these operations is also significantly more diffuse, right? Platform companies don't just have a responsibility to protect people on their platforms, like the work that our teams do, but also to share information. I think the Secretary mentioned this in the last panel: not just sharing information amongst the different platforms that are affected, but with civil society groups and with government organizations that can take meaningful action in their own domains.

The second big trend that we've generally been seeing is that these operations are increasingly domestic and increasingly commercialized. There are commercial actors who sell capabilities to do what we call coordinated inauthentic behavior: disinformation for hire, something Maria's organization has written a lot about in the Philippines. The commercialization of these tools democratizes access to sophisticated capabilities that used to be basically nation-state capabilities, and it conceals the people who pay for them. It makes it a lot harder to hold the threat actor accountable by making it harder for teams like ours or teams in government to figure out who's behind it.

And then the third piece, on the use of AI: much like Clint mentioned, we've generally seen cheap fakes, shallow fakes that are not even AI-enabled, just things like photoshops or repurposed content from other events, mainly being used by the sophisticated threat actors: Russia, China, Iran. Where we do see AI-enabled things like deep fakes or text generation being used is by scammers and spammers. Now, that's not to downplay the threat. Scammers and spammers are arguably some of the most innovative people in the online threat landscape. They move the fastest. They're the least responsive to external pressure, because they just wanna make money. And they often are in jurisdictions that aren't gonna do anything about them. What we should all be alert to is the tactics and techniques that those scammers and spammers use being adopted by more sophisticated actors over time. So if you want to see what's coming, that's where I would be looking.

Now, what can be done about it, what's working and what isn't, especially for some of the examples you used, these AI-enabled capabilities being used in smaller, more private settings? This is where things like watermarking come in, and by watermarking here I mean more what Anna Makanju was talking about: technical steganographic watermarking that can't be easily removed, to identify whether content is authentic or was created by an AI system, and that can be perpetuated by social media platforms, right? So if a company that produces AI content, and Meta is one of those, is willing to be part of that coalition and make sure anything our models produce is discoverable as AI-generated, then when it shows up on Twitter or shows up on our own platforms or shows up on Snapchat, it should carry through those standards.
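
As a toy illustration of the mechanics only (not the robust, keyed steganographic schemes being described here, which are designed to survive re-encoding, cropping, and deliberate removal), here is a least-significant-bit marker embedded in an image with NumPy and Pillow. The 16-bit tag is an arbitrary assumption, and this naive scheme would not survive lossy compression:

```python
# Toy LSB marker: embed and detect an "AI-generated" tag in pixel bits.
# Real provenance watermarks use keyed, error-corrected, robust schemes.
import numpy as np
from PIL import Image

TAG = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # assumed 16-bit tag

def embed_tag(path_in: str, path_out: str) -> None:
    arr = np.array(Image.open(path_in).convert("RGB"))
    red = arr[..., 0]                      # write into the red channel view
    w = arr.shape[1]
    for i, bit in enumerate(TAG):
        r, c = divmod(i, w)
        red[r, c] = (red[r, c] & 0xFE) | bit   # overwrite the lowest bit
    Image.fromarray(arr).save(path_out, format="PNG")  # must stay lossless

def has_tag(path: str) -> bool:
    arr = np.array(Image.open(path).convert("RGB"))
    red, w = arr[..., 0], arr.shape[1]
    bits = [int(red[divmod(i, w)] & 1) for i in range(len(TAG))]
    return bits == TAG
```

The fragility of this toy is precisely why the panel keeps stressing industry-wide standards: the mark has to be something every platform in the chain preserves and checks, not something a single re-save destroys.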

And so there was a compact at Munich amongst many of the tech companies; Microsoft was part of that, as was Google. The more we can raise the bar across the industry to require companies to build in these capabilities early, before we get to the point where the bad things have already happened, the more we can actually build meaningful defenses.

One thing from the last panel that really stuck with me: when Anna was at the White House dealing with Russia policy, I was in the US government on the security side, also dealing with Russia policy. And we were chasing after the problem at that point, right? It had left the station. We have an opportunity now to start building these safeguards in as this technology is taking off. So I'm happy we're having this conversation now, and I thank everyone who pulled this together, because it's an incredibly timely moment for us to be building this.

Thanks. I wanna stick with you for just a second, David, and go a little deeper on messaging platforms. Of course, Meta owns one of the most used, most significant, largest private encrypted messaging platforms in the world, which is WhatsApp. Much of what we know is traveling, these kinds of synthetic messages, whatever form factor they take, video or text or images or audio, travels through WhatsApp. How do you think about ensuring that those platforms do not become vectors for this kind of harmful synthetic content around elections? And what are you doing about that, including about the open parts of WhatsApp as well?

Absolutely. There's some really exciting integration, I think, between some of the technical standards that we've talked about, things like steganographic watermarking that can be programmatically carried through on platforms, and ensuring that robust and reliable encryption remains in place for people all over the world so that their communications can't be spied on by governments, particularly in authoritarian regimes. There are, I think, two different tool sets here. One is ensuring that as platforms, whether it's WhatsApp or Signal or anyone else building these point-to-point communication tools, we're building in tools for the people who use the platform to identify and report problematic content, things like scams and spam, but also things like disinformation. And the other is that, as the industry takes up more of the safeguards around AI systems, we're building in technologies that can be programmatically propagated in our own software without needing to break fundamental encryption, right?

So you can imagine a future where we get all of these companies that produce AI images or AI-generated text to sign up to watermarking standards, and if that content ends up being sent through one of our platforms, the watermark can be carried through without having to have someone in the middle saying, oh, that right there, that's AI-generated.

And so I think that's one of the several reasons why some of these technology standards are so important. They can hopefully be enshrined not just in industry agreements, but also in many of the regulatory conversations that are happening. Because there is a world in which we can retain fundamental encryption standards while still doing our due diligence and meeting our responsibility to protect the broader information environment.

Well, certainly, though, there are things that Meta can do to keep these kinds of messages from going viral, even while protecting encryption. Yasmin, I'm gonna ask you to talk a little bit about what Google is doing to eliminate the risks and stop the spread of AI-generated misleading election information.

Yeah. Quickly, this idea of, at the origination of the content, trying to stamp it in a way that is enduring, so that it can be identified as synthetic, is really important, and that's ongoing work.

One of the things that I think is interesting, actually, is just refusing to provide the generative AI service when the stakes are as high as they are with election queries. A lot of people understand intuitively that there's a tension for technology companies between wanting to make the experience safe for the user and not creating so much friction that they don't want to use the product. So it's interesting: for example, now if you go to Google's generative AI product, which is Gemini, and you search for something election-related, it will give you a non-answer, which actually feels pretty crappy, you know, but they send you to Search instead. They say go to Search, and there's research showing that people want an authoritative source. I think it's interesting to think about this tension between authority and authenticity; those are the mental models that we have from the last decade of search and social media. If it's coming from an institution that I trust, or even Google Search, there's a lot of trust there, so the stakes are really high: you'd better get it right. Or, for social media, if it's coming from my friend, they're in my social network, then I trust them.

Of course, generative AI is neither of those. It's not authoritative; it's not summarizing what the internet says and giving you a distillation of something authoritative. And it sounds like a human, but it's not a human that you know. So I think we don't have mental models yet to deal with generative AI output. And at the moment, I think it's an interesting demonstration of a commitment to putting election integrity first that they're actually giving users a pretty bad experience of the generative AI.

So you're defaulting to sending people to Search, which is more reliable, while you're still sorting this out. We are quickly running out of time. So, Clint, just tell us what Microsoft is doing. We're gonna get him this time.

[crosstalk]

Just conceptually: the Russian concept of reflexive control, if you're familiar with it, is that you conduct an attack on your adversary and then they attack themselves in response. That's somewhat what has happened over the last ten years. They're winning through the force of politics rather than the politics of force. There are more than three nation-states that will probably do some sort of election influence and interference. My team is designed to focus on the actors.

Russia, Iran, China, absolutely. You'll see that in our November report, and we have another report coming out that's election-focused. On this one, I think the key point is that you have to raise the cost on the adversary at some point, rather than raising the cost on yourself to function as a democracy. There are lots of things we can do in policy and tech, and we inform those at Microsoft, and we do data exchanges amongst ourselves. But ultimately, we've got to say: there's a hack here, there's a leak here, and it's coming, and we're anticipating it; we're gonna be out in front of it the next time.

So it's inoculating the public.

It's inoculating the public. It's also raising the cost for actors to do that. Sometimes that is methods and platforms communicating, so we do include controls, but a lot of it is awareness: communicating to the European governments, communicating to the US government. This is what we're seeing, because we can often see it better from the private sector than the public sector can.

You are sharing that information?

We do, if it's something that's impactful, through our nation-state notification system.

Okay, well, we are out of time. So thank you so much. We could have gone
much longer. Appreciate it.

Please welcome to the stage Columbia SIPA Professor Anya Schiffrin.

Okay, thanks, everybody. It's so good to be here. I'm Anya Schiffrin, and I direct the technology and media specialization here at SIPA. I see a lot of our students are in the room. We've been talking all year about AI and the elections and disinformation, and it's really been fantastic to have the Secretary here, as well as Ardine and Maria Ressa, and so many of us were involved. This builds very nicely on some of the other events that we've had. I was lucky enough to get invited to Vivian Schiller's Aspen event in Miami in January; Tom Asher was there as well. It was an incredible wake-up call, laying out what the threats were. So we heard a lot of what we've been hearing about today: that audio deep fakes would be a real threat in the US, and that there would be certain pain points during the elections, such as the counting, that would be really dangerous or risky.

And then also, Alondra Nelson and Julia Angwin worked with IGP and brought together election officials from around the country, and they came here in February to game out scenarios with Maria Ressa. And the kind of people who were there, meeting them was so emotional and so inspiring, hearing the stories of the death threats and everything else. So I'm really glad that for this panel we're turning a little bit to the international situation. As we were all preparing for our classes in December and January, we were reading about how some two billion people were going to have elections this year, across dozens of countries, and how we'd better really watch out; the US is not the only place where this is happening. So I was just thrilled that IGP decided to bring some international voices into this discussion, because I think we have a lot to learn. We've got Ethan Tu from AI Labs in Taiwan, and of course Taiwan has the reputation for being really good at all of this, right? You had Audrey Tang during the Covid pandemic, you've got all that public diplomacy, and we all know you're used to dealing with China. So we're looking forward to your expertise. We've got Javier Pallero; Milei has just gotten elected in Argentina, so we're gonna have to hear about how disinformation played into that, or not. This is obviously a country that's extremely polarized, so no surprises there. And this is the first time that I'm meeting Dominika Hajdu; I think you're gonna be bringing in the perspective from Central and Eastern Europe. I think your election was first, Taiwan, right, Ethan? So maybe we'll start with you. How did it all play out? Did you have the same problems that we keep hearing about from the other panelists?

Yes, so I can introduce my institute a little bit. We're Taiwan AI Labs, and we're the very first open AI research institute in Asia; we are a little bit younger than OpenAI. What we do is transparent, responsible, trustworthy AI evaluation, and that includes information manipulation. So, for example, during the pandemic we used artificial intelligence to know whether a troll account on the internet was manipulating information against Taiwan. And during Taiwan's presidential election, we could observe billions of activities on online social media, including Facebook, Twitter, and PTT.

Is it mainly a threat from China, or internal too?

We have PTT, which is Taiwan's own internal platform, founded by me in 1995. And Facebook, of course; Twitter is also one of the major platforms in Taiwan, and Taiwanese people also look into, for example, WeChat and TikTok.
TikTok is a big topic recently. In Taiwan, we also observed information manipulation on these social media platforms, across the board, during the election. There was a lot of troll activity, which is what Facebook, as just mentioned, calls coordinated inauthentic behavior. If we use artificial intelligence, we can identify that those people are actually not real humans: they appear together, they disseminate false information together, and they like to reference video. Short video is a trending topic this year; in the past, we saw a lot of information manipulation in text, but this year we see a lot of short video. YouTube also has short video, and TikTok, but usually the short videos on YouTube originated from TikTok.
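
A minimal sketch of the co-activity signal Tu is describing, flagging account pairs that repeatedly post identical text within a short window of each other. The input layout, window, and threshold are illustrative assumptions; production systems combine many more signals (creation times, shared infrastructure, follower graphs):

```python
# Flag account pairs that post the same text near-simultaneously:
# a toy proxy for coordinated inauthentic behavior.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 60   # assumed co-posting window
MIN_CO_POSTS = 3      # assumed hits before a pair is flagged

def coordinated_pairs(posts):
    """posts: iterable of (account_id, unix_ts, text). Returns flagged pairs."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((account, ts))
    pair_hits = defaultdict(int)
    for events in by_text.values():
        events.sort(key=lambda e: e[1])
        for (acc_a, t_a), (acc_b, t_b) in combinations(events, 2):
            if acc_a != acc_b and abs(t_b - t_a) <= WINDOW_SECONDS:
                pair_hits[tuple(sorted((acc_a, acc_b)))] += 1
    return {pair for pair, n in pair_hits.items() if n >= MIN_CO_POSTS}
```

The point of a signal like this is exactly what he says: no single post looks fake, but the accounts "appear together" and "disseminate together" in ways real humans rarely do.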

So, just what we were hearing about: cross-platform, and a lot of video and audio as well. Could you tell us the source? Were you also able to track down the source of this?

Yeah. So, using artificial intelligence, we know there are deep fakes. There are a lot of videos that have the same narratives but use different backgrounds and different voices, and they flood into YouTube and the TikTok platform to try to influence how people feel in Taiwan. So we use artificial intelligence, speech recognition, language understanding; then, by identifying the troll accounts, we know those accounts are not real humans, and we can cluster the stories they are trying to spread. So during the election, we could clearly understand, for example, when the disinformation flood was coming. For example, the flood came when the Taiwan president visited the United States; that was the very first huge burst of troll activity. And another peak was when Joe Biden said that if Taiwan were under threat, the US would help defend Taiwan. Then we saw a spike of information manipulation saying that the United States is helping Taiwan to develop biological weapons; they try to destroy that narrative too.
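
A minimal sketch of the narrative-clustering step he describes, grouping posts with near-identical wording using TF-IDF vectors and agglomerative clustering via scikit-learn. The distance threshold is an assumption, and in their pipeline the input text would come from speech recognition over short videos:

```python
# Group near-duplicate narratives by lexical similarity (sketch).
# Requires scikit-learn >= 1.2 (older versions use affinity= not metric=).
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_narratives(texts, threshold=0.7):
    """Return one cluster label per text; similar texts share a label."""
    vectors = TfidfVectorizer().fit_transform(texts).toarray()
    clustering = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=threshold,  # arbitrary; tune on real data
        metric="cosine",
        linkage="average",
    )
    return clustering.fit_predict(vectors)

labels = cluster_narratives([
    "The US is pushing Taiwan toward war.",
    "America is pushing Taiwan into a war.",
    "Egg prices prove the government failed.",
])
print(labels)  # the two paraphrases land in the same cluster
```

Once posts are clustered, the spikes he mentions fall out naturally: you plot each cluster's volume over time and watch for bursts that coincide with political events.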

Yeah, so basically China is the source, you feel.

Yes. According to our understanding, the troll accounts on social media try to amplify narratives that align with China. And if you look into the state-affiliated media, we can compare: we use artificial intelligence to group the same narratives together, and then we can see the troll accounts on Facebook and Twitter, for example, echoing those narratives from China's state media. You can see how it spreads.
Oh, great. Well, very interesting that you're able to do that kind of detection work, and I know you've put out some really interesting reports that everybody who's interested can read. Dominika, I wanted to find out from you: does this sound like what you're seeing in your part of the world, sort of video narratives being spread by state actors, audio? What's the state of play where you are?

Yeah, so also, just to...

Introduce your organization.

Yeah, absolutely, some more details to introduce myself.

So, I come from GlobSec. It's a think tank founded in Central Europe; we were founded in Bratislava, Slovakia, but we cover basically all the countries in Central and Eastern Europe, and we now have offices in Kyiv, Brussels, and DC. I am leading the Centre for Democracy and Resilience. We were founded in 2015, shortly after the annexation of Crimea and the invasion of Ukraine, because we started seeing the floods of information manipulation and disinformation across Central and Eastern Europe, primarily coming from the Kremlin. At that time, this was mostly limited to a few pages that spread pro-Kremlin propaganda, and it was very visibly pro-Kremlin or pro-Russian. But then, as was mentioned during the first panel, the tactics have evolved tremendously. First of all, the Kremlin, especially in the context of Central and Eastern Europe, has been able to build networks and proxies. So right now, and the Vice President of the European Commission mentioned it, it's not so much about the Kremlin interfering directly, but through domestic actors: political actors, websites, social media pages, etc. And I come from Slovakia, and we had elections in September, and it was a peculiar case, because we could actually see both direct and indirect interventions from the Kremlin. Direct: a lot of countries in the EU have a political campaigning silence, which means that from one to two days prior to the elections, you cannot do any campaigning. And during this period, there was a press release by the Russian press agency saying that the US was going to interfere in the elections by doing everything it could for a pro-democratic, progressive party to win. And just so you're aware of how it was: two parties were leading, one rather nationalistic and populist with some very strong pro-Kremlin figures running first, and a progressive, liberal, pro-democracy party running second. So this was released, and around the same time, a deep fake audio, a synthetically made audio, was also released on Telegram, by an account that was probably the wife of a Slovak political representative who is currently being prosecuted for spreading Russian war propaganda. So attribution in this case is quite difficult. But this deep fake spread from Telegram onto Facebook and had thousands of shares there, despite the fact that it was quite a lousy audio; if you listened to it carefully, you would actually realize that it wasn't real.

The problem is that despite the fact that it's quite a small case in quite a small country, it offers several important lessons. First, we really need some clear red lines when it comes to generative AI prior to elections. I do agree that labeling is important, and watermarking and labeling are definitely a way to go. But what if there is content spread 24 hours before the elections, and it's made by a Kremlin-based or Beijing-based company, which doesn't require such watermarking, because that is going to be a consensus only among the Western-based companies? Is there an ability to stop it if it appears 24 hours prior to the elections? Are we going to ban it? Are we going to take it down? I think there are some red lines that have to be defined, and I think the Michigan law could be a way forward.

Second, these measures also have to be clearly defined for social media platforms. Because what happened in Slovakia specifically is that around 70 pieces of AI-generated content were identified, and around half of them stayed online, 15 were taken down, and 13 were labeled, or something like that. So there is quite an inconsistency in treating these cases. And of course, we need to treat some of the cases on a specific basis: whether they are talking about election manipulation, or the narratives of rigged elections, which is actually what most of these deep fakes were talking about. So this is a common tactic that has been used.

I hope we'll have time to talk about regulation. But, Javier, obviously some of our alumni, a lot of people, have been very involved in tracking Russian disinformation in South America, and I've heard you say that actually that's not really the problem in Argentina. So that's quite interesting. What is the problem? I know Milei won with a lot of the youth vote, and I guess the very high unemployment and inflation have been upsetting people quite a bit. How much was economics and how much was information?

Yeah, well, that was one of the main things that we observed. In my previous work, I worked at an organization called Access Now; I used to be the global director of policy there, and we were able to see how these issues evolve around the world, right? And it caught my attention that Argentina had quite specific characteristics that I haven't seen anywhere else. For example, I also worked on independent research on the Argentinian elections, and we didn't find any clear evidence, or even initial evidence, of foreign intervention. Most of what happened with the online social movement that brought Milei to power was quite organic. We didn't see many fake accounts or troll centers. We didn't see much foreign intervention, which was very interesting, in the sense that, as you mentioned before, Argentina is a country that has been divided for a while now. It started with Kirchnerism a couple of presidential periods ago, and this kind of internal political division was really strong from the beginning. So when one of the candidates appears and makes a proposal that goes against the status quo, there's the other half of the country that is ready to engage. Yeah, there's a need to engage.

There's something that we detected, which is very clearly a resonance of certain kinds of messages and words and ways of communicating online that resonated with the people. I would say that the main aspect of Milei's campaign is the sentiment of anti-politics. People are not only disenfranchised or disillusioned with politics, they are offended by politics and by politicians. It's a personal thing. It's really a reaction movement. And people who don't talk like politicians, don't look like politicians, you have seen his looks and the way he conducts himself and so on, are really attractive to those kinds of voters, and especially younger people who, as was mentioned before by Ms. Green, have a different way of understanding information. This idea of moving from institutions to people as the source of authority is something that is really resonating with the population. And it's easy to understand. For example, in Argentina, our military dictatorship lasted until the 80s, which in internet time is the Middle Ages, but in historic time is yesterday, right? So there is this idea of not trusting institutions, of the government being a potential source of repression and violence, and at the same time institutions that are really young, that haven't had the time to get a good basis in our society, together with corruption and other kinds of things. It's a terrible mix. It's the fertile ground for any of these populist leaders to just appear and gain a lot of following. So I would say that was one of the key points. Of course, the lack of regulation for platforms is a problem. Countries like Argentina are second-class citizens, second-class users, for some platforms.

And that's what I also wanted to talk about: how much agency do you have? It's great to hear all the companies talk about the new standards, but let's face it, if they voluntarily started doing content provenance, they would set the standard for everybody. So, instead of sitting around saying, I'm waiting for regulation, you can also model best practices. And then I'm thinking, precisely, we know from all the reporting that's been done, including by many of the people in this room, that your countries have less moderation, there's nobody to call there, minority languages aren't properly moderated or looked after. So I'm wondering about your perspective, and obviously Dominika will talk to us about the Digital Services Act. Are you gonna sit back and wait for the big companies to change their policies and start doing content authenticity? What can you do on your own? I feel like Brazil has really been leading the way for Latin America in terms of regulation, but other countries haven't been. And I'm curious to know whether regulation comes up in Taiwan as well, cuz it's not something I know that much about. Ethan, do you wanna go first, and then we'll hear from Javier, and then we'll talk about the DSA?

So when we talk about regulation: Taiwan actually once had a case where it tried to regulate the platforms, but it failed, because people said that regulating information manipulation might even go against freedom of speech.

Gotcha. So it's like the US.

Yes. At Taiwan AI Labs, we just recently published a report on information manipulation on TikTok. Maybe you can go to our website, infodemic.cc, and look into that. The narratives are pretty similar to what happened in Taiwan before.

Could you tell us what regulation had been considered, and what were the forces that defeated it?

I would say that if we go in the direction of fact-checking and content moderation, that direction will face a lot of challenges, because people will say it's against freedom of speech, and also: fact-check what is fake? There would be a lot of debate. So in Taiwan, instead of talking about fact-checking or content moderation, we're talking about how we can disclose the information manipulation.

Right, of course. So, I'm being given the sign that we only have five minutes. Okay, but this is of course what's happened in the US, right? We were told, you can't regulate, but we can at least do media literacy and fact-checking. Then it turned out that even teaching media literacy was controversial, and the researchers doing the tracking are all now getting subpoenaed. So, just like in the 1930s, when Columbia was also pioneering, the space is getting squeezed. That's really interesting; I'm definitely gonna go to your website as soon as this is over. Javier, any conversation about regulation? Then we're gonna finish optimistically with the DSA.

Very quickly. I think Brazil is a great example, and another untapped resource is the Inter-American system for human rights. The freedom of expression standards contained there are widely accepted across Latin America and across the Americas in general. It's a very good mix between the more strongly regulation-oriented stance from the European side and the more permissive, let's say First Amendment, standards in the US. So I think there's an interesting middle way to work with there. There is jurisprudence from the Inter-American Court of Human Rights, for example, on indirect means of affecting freedom of expression. One of them, for example, could be the undue interference that external actors, or sometimes social media companies themselves, exert on people's discourse. So there's a lot to grow there. And of course, there's a lot to do in terms of electoral regulation: modernization of the bureaucracy of the electoral commissions, giving more power, more agency to them. Bolsonaro, for example, was stopped very quickly by the electoral authorities.

Interestingly, he's been banned from participating for really much longer than that.

And this is all electoral regulation, it's not...

Dominika, you get the last word. I know we're running out of time. Okay, the DSA: how do we feel? Is it helping?

So the DSA targets illegal online speech. And I think it is a very powerful piece of legislation precisely because it doesn't target disinformation, because there you're on very unsteady ground with freedom of expression. But when it comes to illegal speech, I think it is making progress, because there are actually requirements for the platforms to issue regular reporting, which is helping us on a per-country basis. This is important because in languages like Dutch, Slovak, Czech, Hungarian, you can actually see what has been done; we didn't have this information before. So in this sense, it's really good.

And when do you think it will start to kick in? Cuz I know that the different countries are still staffing up.

It has started already.

No, I know, but when will we all notice it?

Oh, well.

Okay. So there are already reports out, so you can already check those out; if you do a bit of research, you will notice it. And on the platforms, for example, you can already report illegal content. What I'm worried about is the platforms that are not cooperative. There is so much exchange of information between Facebook, Microsoft, Google; that's amazing. But then what about Telegram, for example, right? Which is the source of extremism and also pro-Russian propaganda and all the malign content.

Very much so. And there's been so much interesting stuff. Anyway, we could go on all day, but I certainly don't wanna get in the way of the next panel, which is gonna be really interesting. So thank you very much, Javier, Ethan, and Dominika, and hopefully we'll talk.

Okay.

Please welcome back to the stage Secretary Hillary Rodham Clinton, and, joining us virtually, IGP Carnegie Distinguished Fellow Eric Schmidt.

First, we are so delighted to have Eric Schmidt with us, especially because he is, as you just heard, one of our Carnegie Distinguished Fellows at the Institute of Global Politics, and he has been meeting with students and talking to faculty about a lot of these AI issues that we have surfaced during our panels today. And of course, he wrote a very important book with the late Doctor Henry Kissinger about artificial intelligence. So we're ending our afternoon with Eric and trying to see if we can pull together some of the strands of thinking and challenges and ideas that we've heard. So, Eric, thank you for joining us. You look like you're in a very comfortable but snowy place. And I wanted to start by asking you: what are you most worried about with respect to AI in the 2024 election cycle?

Well, first, Madam Secretary, thank you for inviting me to participate in all the Columbia activities. I'm at a tech and AI conference in snowy Montana, which is why I'm not there. If you look at misinformation, we now understand extremely well that virality, emotion, and particularly powerful videos drive voting behavior, human behavior, moods, everything. And the current social media companies are weaponizing that, because they respond not to the content but to the emotion, because they know the things that are viral are outrageous, right? Crazy claims get much more spread. It's just a human thing. So my concern goes something like this. The tools to build really terrible misinformation are available today, globally. Most voters will encounter them through social media. So the question is, what are the social media companies doing to make sure that what they are promoting, if you will, is legitimate under some set of assumptions?

You know, you did an article in the MIT Technology Review fairly recently, maybe at the end of last year, and you put forth a six-point plan for fighting election misinformation and disinformation; I wanna mention both, because they are distinct. What were your recommendations in that article, to share with our audience in the room and online, Eric? And what are the most urgent actions that tech companies, particularly, as you say, the social media platforms, could and should take before the 2024 elections?

Well, first, I don't need to tell you about misinformation, because you have been a victim of it, and in a really evil way, by the Russians. When I look at the social media platforms, here is the blunt fact: if you have a large audience, people who want to manipulate your audience will find it, and they'll start doing their thing. And they'll do it for political reasons, for economic reasons, or they're simply nihilists; there are people who just wanna take down powerful figures cuz they don't like authority, and they'll spend a lot of time doing it.

So you have to have some principles. One is you have to know who's on the platform. In the same sense, if you have an Uber driver, you don't necessarily know that Uber driver's name and details, but you can be quite sure that Uber has checked them out, because of all the various problems they had in the past. So you trust that Uber will deliver you a driver who is at least a legitimate driver, right? That's sort of the way to think about it. The platform needs to know, even if it doesn't tell you who they are, that they're real human beings. Another thing you have to know is where the content came from. And we can technologically put in watermarks; the technical term is steganography, where you use an encryption technique and you mark where the content came from, so you know roughly how it entered your system. You also need to know how the algorithms work. We also think it's very important that you work on age gating, so you don't have people below 16. And those are relatively sensible ways of taking the worst parts of it out.

I think one of the things that's happened since I wrote that article is, if you look at the success of Reddit and their IPO: they were reluctant, like everybody else in my industry, to do anything. They brought in a new CEO who shut down entire subreddits of hate speech, and it improved the overall discourse. So the lesson I've learned is that if you have a large audience, you have to be an active manager of people who are trying to distort what you, as the leader, are trying to do.

That Reddit example is a very good one, because, you know, I don't have anything like the experience you do, but just as an observer, it seems to me that there's been a reluctance on the part of some of the platforms to actually know. It's kind of like they want deniability: I don't wanna look too closely, cuz I don't really wanna know, and then I can tell people I didn't know, and maybe I won't be held accountable. But I actually think there's a huge market for having more trust in the platforms because they are taking down certain forms of content that are dangerous, however you define that. And your recommendations in your article focus mostly on the role of content distributors. So maybe go a little bit further, Eric, and explain to us what we should think about, and maybe more importantly, what we should expect from AI content creators and from social media platforms that are either utilizing AI themselves or serving as the platforms for the use of generative AI. How do we think about protecting our elections? And does it matter whether it's a social media platform, a big AI company, or even open-source developers? Is there some way to distinguish that?

Well, it's sort of a mess, as the previous panel discussed. And the reason it's a mess is that there are many different ways in which information gets out. So if you go through the responsibilities: the legitimate players offering tools and so forth all have a responsibility to mark where the content came from and to mark that it's synthetically generated. That seems kind of obvious. In other words, we started with this, and then we made it into that. And there are all sorts of corner cases, like: I touched up the photo, but you should record that it was touched up, so you know that it's an altered photo. It doesn't have to be in an evil way, but that's an example.

The real problem here has to do with a confusion over free speech. So I'll say
my personal view, which is that I'm in favor of free speech, including hate
speech that is done by humans. Then we can say to that human, you're a hateful
person, and we can criticize them, and they can listen to us, and hopefully we
correct them. That is my personal view. What I am not in favor of is free speech
for computers. And the confusion here is you get some person, right, who is just
literally crazy, who's spewing all this stuff out, whom we could largely ignore,
but the algorithms then boost them. So there is absolutely liability in what the
social media platforms are doing. And unfortunately, although I agree with what
you said, the trust and safety groups in some companies are being made smaller
or being eliminated. I believe at the end of the day these systems are gonna get
regulated, and pretty hard. And the reason is that you have a misalignment of
interests. If I'm the CEO of a social media company, I wanna maximize revenue.
I make more revenue with engagement. I get more engagement with outrage. So one
of the ways to think about it is: why are we so outraged online? Well, it's
probably because the social media algorithms are boosting outrageous stuff.
Most people, it is believed, are more in the center, and yet we focus on the
extremes. And this is true of both sides; everybody is guilty. So to answer your
question precisely, I think what will happen is that AI will get even better at
making things more persuasive, which is good in general for understanding and so
forth, but is not good from the standpoint of election truthfulness.

Yeah, that is exactly what we've heard this afternoon: the sort of
authoritativeness and authenticity issues are going to get more and more
difficult to discern, and then it'll be a more effective message. And you know,
I was struck by one of your recommendations, which is a recommendation that
could only be made at this point in human history, and that is to use more real
human beings to help.

It's almost kind of absurd that we're sitting around talking about, well, maybe
we can ask human beings to help human beings figure out what is or isn't, you
know, truthful. How do we incentivize tech companies to actually use human
beings? And how do we avoid the exploitation of human beings? Because there have
been some pretty troubling disclosures about the sort of sweatshops of human
beings in certain countries in the global south who are being driven to make
these decisions, and it can be quite overwhelming. So when you've got companies,
as you just said, gutting trust and safety, how do we get people back to some
kind of system that will make the kind of judgments that you're talking about?

Well, speaking as a former CEO of a large public company, companies tend to
operate based on fear of being sued, and Section 230 is a pretty broad
exemption. For those in the audience, Section 230 is the provision of US law
that shields platforms from liability for content their users post. It's
probably time to limit some of the broad protections that Section 230 gave.
There are plenty of examples where someone was shot and killed over some content
where the algorithm enabled this terrible thing to occur. There is some
liability there. Now, we can debate what that is, but if you look at it as a
human being, somebody was harmed and there was a chain of liability, including
an evil person, but the system made it worse. So that's an example of a change.

But I think the truth, if I can just be totally blunt, is that this is
ultimately about information, and the information space we live in is one you
can't ignore. I used to give the speech where I would say, you know how we solve
this problem? Turn your phone off, get off the internet, eat dinner with your
family and have a normal life. Unfortunately, my industry, and I'm happy to have
been part of it, made it impossible for you to escape all of this. As a normal
human being, right, you're exposed to all of this terrible content and filth and
so forth and so on. That's gonna ultimately get fixed either by the industry
collaboratively or by regulation.

A good example here is, let's think about TikTok, because TikTok's very
controversial right now. It is alleged that certain kinds of content are being
spread more than others; we can debate that. TikTok isn't really social media.
TikTok is really television. And when you and I were younger, there was this
huge debate over how to regulate television, and there was something called an
equal time rule. And ultimately, it was a sort of rough balance where we said,
fundamentally, it's okay if you present one side as long as you present the
other side in a roughly equal way. That's how societies resolve these
information problems. It's going to get worse unless we do something like that.

Well, I agree with you 100%, in both your analysis and your recommendations.
And in the very first panel, we talked about the need to revisit and, if not
completely eliminate, certainly dramatically revise Section 230. It's outlived
its usefulness. I mean, there was an idea behind it back in, you know, the late
90s, when this industry was so much in its infancy. But we've learned a lot
since then, and we've learned a lot about how we need to have some
accountability, some measure of liability, for the sake of the larger society,
but also to give direction to the companies.

I mean, these are very smart companies. You know that; you spent many years at
Google. These are very smart companies. They're gonna figure out how to make
money, but let's have them figure out how to make a whole lot of money without
doing quite so much harm. And that partly starts with dealing with Section 230.

You know, when we were talking earlier about what AI is aiming at, the panelists
were all very forthcoming in saying, look, we know there are problems, we're
trying to deal with these problems. We know from even just the public press that
a number of AI companies have invented tools that they've not disclosed to the
public because they themselves assessed that those tools would make what is a
difficult situation a lot worse.

Is there a role, do you think, Eric? I know there was the statement negotiated
at the Munich Security Conference, which was a start. But is there more that
could be done with a public-facing statement, some kind of agreement by the AI
companies and the social media platforms to really focus on preventing harm
going into the election? Is that something that's even feasible?

It should be. The reason I'm skeptical is that there's not agreement among the
political leaders (and of course, you're a world expert on that) and the
companies on what defines harm. I have wandered around Congress for a few years
on these ideas, and I'm waiting for the point where the Republicans and the
Democrats agree, from their local and individual perspectives, that there's harm
on both sides. We don't seem to be quite at that point. This may be because of
the nature of how President Trump works, which is always sort of baffling to me.

But there's something in the water that's causing a non-rational conversation,
so I'm skeptical that it's possible right now. I obviously support your idea.
I think the other thing I would say, and I don't mean to scare people, is that
this problem is gonna get much worse over the next few years.

Maybe not by November, but certainly in the next cycle, because of the ability
to write programs. So I'll give you an example. I was recently doing a demo. The
demo consists of this: you pick a stereotypical voter. Let's say it's a Hispanic
woman with two kids; she has the following interests. You create a whole
interest group around her, and she doesn't exist, it's fake. And then you ask
the computer to write a program in Python to generate 500 variants of her,
different sexes, different races, different ages and backgrounds, and so forth,
who all echo the same voice. So AI, broadly speaking, can generate entire
communities of pressure groups that are in fact virtual, and it's very hard for
the systems to detect that these people are fake. There are clues and so forth.
But to me, this question about the ability to have computers generate entire
networks of people who don't exist to act for a common cause, which may or may
not be one that you and I agree on, but is probably influenced by the national
security interests of North Korea or China or Russia, or by some business
objective from the tobacco companies or you name it: yeah, I worry a lot about
that. And I don't think we're ready. It's possible, just to hammer on this
point, for the evil person who inevitably is sitting in the basement of their
home, whose mother leaves food at the top of the stairs, to do this on their
computer in a day. That's how powerful these tools are.
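
On the "there are clues" point, one hedged illustration of what detection can
look at: personas stamped out of the same template tend to share near-duplicate
text. The following Python sketch, using only the standard library, flags
suspiciously similar account bios; the threshold, the sample bios, and the idea
of comparing bios alone are illustrative assumptions, and real
coordinated-behavior detection relies on far richer signals.

import itertools
from difflib import SequenceMatcher

def flag_similar_bios(bios, threshold=0.85):
    # Pairwise-compare bios; return index pairs that look template-generated.
    # O(n^2) comparisons, so this is a sketch, not a production detector.
    suspicious = []
    for i, j in itertools.combinations(range(len(bios)), 2):
        ratio = SequenceMatcher(None, bios[i].lower(), bios[j].lower()).ratio()
        if ratio >= threshold:
            suspicious.append((i, j))
    return suspicious

# Two bios generated from the same template get flagged; the distinct one does not.
bios = [
    "Mom of two, coffee lover, proud to vote my values in Ohio",
    "Mom of two, coffee lover, proud to vote my values in Texas",
    "Retired teacher who posts about gardening and local history",
]
print(flag_similar_bios(bios))  # [(0, 1)]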

Okay. Well, let's try to bring it back a little bit to where we are here at the
university, in this great setting of so many people who have a lot to
contribute, and working in partnership with Aspen Digital, which similarly has
a lot of convening and outreach potential. What can universities do? What can we
do in research, particularly on AI? How do we create a kind of broad network of
partners, like we're doing here between IGP and Aspen Digital, so that we begin
to do what's possible to educate ourselves and educate our students in combating
mis- and disinformation with respect to elections?

So the first thing we need to do is show people how easy it is. And so I would
encourage every university program to have students actually try to figure out
how to do it. Obviously don't actually do it, but it's relatively easy, and it
was really quite an eye opener for me. And I've done this, you know, for as long
as I've been alive.

The second thing I would do: there's an infrastructure that would be very
helpful. The best design that I'm familiar with is blockchain-based, and it's
essentially a name and origin for every piece of content, independent of where
it showed up. So if everyone knew that this piece of information showed up here,
you could then have provenance and understand how it got there, who pushed it,
who amplified it. That would help our security services, our national security
people, understand: is this a Russian influence campaign, or is this something
else?
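
As a rough sketch of the blockchain-based design described here, the core idea
is an append-only log where each entry commits to the previous one, so the
history of who created, pushed, and amplified a piece of content cannot be
silently rewritten. This single-process Python illustration is an
assumption-laden stand-in: a real system would distribute the ledger across many
parties and use signatures, not just hashes, and all the field names are
invented.

import hashlib
import json
import time

class ProvenanceLedger:
    # Append-only hash chain: rewriting any old entry breaks every later hash.
    def __init__(self):
        self.entries = []

    def record(self, content_hash: str, actor: str, action: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_hash": content_hash,  # e.g. sha256 of the image or post
            "actor": actor,                # who created, pushed, or amplified it
            "action": action,              # "created", "shared", "amplified", ...
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Walk the chain and recompute every hash link.
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or digest != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

Querying such a ledger for everyone who "amplified" a given content hash is what
would let an analyst trace an influence campaign back to its entry point.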

So there are technical things, and then there are also educational things. I
think this is only gonna get fixed if there is a broad bipartisan consensus on
taking the edges, the crazy people, and you know what I'm talking about, and
basically taking them out. I'll give you an example.

There was an analysis during Covid finding that the No. 1 spreader of
misinformation about Covid online was a doctor in Florida who accounted for
something like 13% of all of it. And he was very clever: he had a whole
influence campaign of lies trying to convince you to buy his supplements instead
of getting a vaccine. That's just not okay, in my view. And the question for me
is, why was that allowed by that particular social media company to exist even
after he was pointed out? So you have a moral framework, you have a legal
framework, you have a technical framework, but ultimately it has to be
understood that it's not okay to allow this evil doctor, for profit, to
essentially mislead people on vaccinations.

Well, just to follow up on that, I don't at all disagree about what has to
happen if we're gonna end up with some kind of legislation or regulatory
framework from the government. But if they were willing, is there anything that
the companies themselves could do that would lay out some of the guardrails that
need to be considered before we get to the consensus around legislation?

Of course the answer is yes. But the way this actually works in a company is you
don't get to talk to the engineers; you get to talk to the lawyers. And the
lawyers, as you very well know, are very conservative, and they won't make
commitments. So it's gonna require some kind of agreement among the leadership
of the companies on what's in bounds and what's out of bounds, right? And
getting to that is a process of convening and conversations. It's also informed
by examples. So I would assert, for example, that every time someone is
physically harmed from something, we need to figure out how we can prevent that.
That seems like a reasonable principle if you're the one designing the digital
world, right? So working back from those principles is probably the way to get
started. It's not gonna happen unless there's agreement: either it's forced on
them by the government, or there's agreement by the CEOs.

The best way to achieve that, in my view, is to make a credible and detailed
proposal of where the guardrails are, right, and what they mean. What I have
learned in working on this is you have to have content moderation. When you have
a large community, these groups will show up. They will find you, because their
only goal is to find an audience to spread their evil, whatever the evil is. And
I'm not taking sides here.

Well, I think the guardrails proposal is a really good one. And obviously, you
know, we here at IGP, Aspen Digital, the companies who are here, the researchers
who are here, and others, maybe people should take a run at that. I mean, I'm
not naive; I know how difficult it is. But I think this is a problem we all
recognize. It's not gonna get better if we just keep wringing our hands and
fiddling on the margins. We have to try something different.

Let me just propose an action, eh? You know, I've sat through all these trust
and safety discussions for a long time, and these are very thoughtful analyses.
But they're not producing solutions that are implementable by the companies in
a coherent way. So here's my proposal: identify the people, understand the
provenance of the data, publish your algorithms, and be held, as a legal matter,
to the claim that your algorithms are what you said they are. In other words,
what you said you do, you actually have to do. Reform Section 230, make sure you
don't have kids on the platform, and so forth. You know, make your proposals,
but make them in a way that is implementable by the team. So, for example, if
there's a particular kind of information that you think should be banned, write
a specification of it well enough that, under your proposal, the computer
company can stop it. Yeah, right. That's where it all fails, because the
engineers are busy doing what they understand; they're not talking to the
lawyers too much. The lawyers' job is basically to prevent anything from
happening, because they're afraid of liability. Right. And they don't have
leadership from the Congress, for the reasons that you know. And that's why
we're stuck.
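
To illustrate what "write a specification well enough that the computer company
can stop it" might mean in practice, here is a minimal Python sketch of a
machine-readable content rule: a precise pattern plus scope, rather than a vague
prose description. The rule shown is hypothetical and deliberately crude; real
moderation uses classifiers and human review, not a single regex.

import re
from dataclasses import dataclass

@dataclass
class PolicyRule:
    # One bannable-content rule, written precisely enough that a moderation
    # pipeline can apply it mechanically instead of interpreting prose.
    rule_id: str
    description: str
    pattern: str      # a regex the pipeline can actually run
    applies_to: str   # "post", "ad", "profile", ...

def violates(rule: PolicyRule, kind: str, text: str) -> bool:
    # True if the content is in scope for the rule and matches its pattern.
    return kind == rule.applies_to and re.search(rule.pattern, text, re.IGNORECASE) is not None

# Hypothetical rule in the spirit of the vaccine-misinformation example above.
rule = PolicyRule(
    rule_id="health-scam-01",
    description="Ad that sells a product as a substitute for vaccination",
    pattern=r"(instead of|skip).{0,40}vaccin\w+.{0,60}(buy|order)",
    applies_to="ad",
)
print(violates(rule, "ad", "Skip the vaccine! Order my immunity drops today."))  # True

The point of the exercise is the discipline: if a proposal cannot be written
down this precisely, an engineering team cannot implement it.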

Well, that's both a summary and a challenge, Eric, and I particularly appreciate
that, especially the work you've been doing to try to sort this out and give
some guidance. So you get the last word, from beautiful, snowy Montana: the last
word to offer that challenge, to ask us to respond and to follow up on what
you've outlined as at least one path forward, and to try to do it in a
collaborative way with the companies and other concerned parties.
I do this as the snowstorm is hitting behind me. Look, I think the most
important thing we have to understand is that this is our generation's problem,
and it is under human control. There's this sort of belief that none of this
stuff can get fixed. But you know from your pioneering work over some decades
here that with enough pressure, you really can move the needle. You just have to
get people to understand it.

These problems are not unsolvable. This is not quantum physics, which is
impossible to understand. It's a relatively straightforward problem of what's
appropriate and what's not. The AI algorithms can be tuned to whatever society
wants.

So my strong message to everyone at Columbia and, of course, all the partners
is: instead of complaining, which I like to do a great deal, why don't we
collectively write down the solution, organize partner institutions, and try to
figure out how to get the people in power to say, okay, I get it, right? This is
reasonably bipartisan; it makes society better.

There's this old rule, like Gresham's Law, which is that bad speech drives out
good speech, which is why the internet is a cesspool. I used to say that, and I
would say: since I don't like to live in a cesspool, I just turned it off. But
the problem you have, and this is especially true for young people, is the
damage that's being done online, to women and so forth. It's just horrific.

Why would we allow this in modern society? We can fix it. You just have to have
an attitude. I'm trying to fund some open-source technology in this area, better
tools to detect bad stuff. It's gonna take some concerted effort. And I really
appreciate your attention on this, Secretary. Somebody's got to push.

Well, you and I, let's keep going, Eric. And I'm so grateful to you. And I hope
you have a great time in the snowstorm and whatever else comes next. But let's
show our appreciation to Eric Schmidt for being with us. Thank you so much.

Thank you all.

Well, I think we have a call to action. We just have to get ourselves in the
frame of mind that we're willing to do that. And even writing something down
will help to focus our, you know, minds on what makes sense and what
doesn't make sense. So we're not gonna let you all off the hook. We wanna
come back to you. We wanna have something come out of this. We can talk
about this, meet about this till the cows come home. But in the meantime, as
Eric said and I agree, it'll just get worse and worse and we have to figure out
how we can assert ourselves and maintain the good and try to deal with, you
know, that which is harmful. So please join us in this effort. And as I say, we
will come back to you and seek your guidance and your support. Thank you all
very much.
