Towards Effective Governance of Foundation Models and Generative AI

Download as pdf or txt
Download as pdf or txt
You are on page 1of 57

MARCH 2024

Towards Effective
Governance of
Foundation Models
and Generative AI
TAKEAWAYS FROM THE FIFTH EDITION
OF THE ATHENS ROUNDTABLE ON
AI AND THE RULE OF LAW
Towards Effective Governance of Foundation
Models and Generative AI: Takeaways from the fifth
edition of The Athens Roundtable on AI and the Rule
of Law

CONTACT: [email protected]

CITE AS: Amanda Leal. “Towards Effective Governance of Foundation Models and Generative AI:
Takeaways from the fifth edition of The Athens Roundtable on AI and the Rule of Law” (The Future
Society, March 2024).

© 2024 by The Future Society. Photography by Emanuel K Miranda. Design by Vilim Pavlović.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

| II |
stnetnoC
Executive Summary 1

Remarks by The Future Society 7


Remarks by Yolanda Lannquist 7
Remarks by Nicolas Miailhe 7
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION 8
Keynote by Vilas Dhar 8
Global Governance: What’s next for international institutions (Panel) 9
Trustworthy AI in the Global South (Fireside chat) 11

The Path to Generative AI Regulation in China (Fireside chat) 13

Remarks by Yoichi Iida 15


Remarks by U.S. Representative Sara Jacobs 16
TRUST & DEMOCRATIC RESILIENCE 17

The impact of Generative AI on Elections (Panel) 17

Uncovering the Use of Generative AI in Judicial Contexts (Fireside chat) 20

Trends and Challenges for AI Governance in 2024 (Fireside chat) 22

Remarks by Cédric Wachholz 24

SAFETY & SECURITY 25

Keynote by Yoshua Bengio 25

Keynote by U.S. Representative Anna Eshoo 26

Managing Safety and Security of Foundation Models (Panel) 27

DHS Priorities for AI Governance (Fireside chat) 30


Navigating AI Deployment Responsibly: Open-Source, Fully-closed, and the
32
_:_Gradient in Between (Roundtable dialogue)
Remarks by Audrey Plonk 34

MEASUREMENT & STANDARDS 35

Keynote by Dr. Erwin Gianchandani 35

Decoding AI: Challenges in Classification, Measurement, and Evaluation (Panel) 36

Remarks by John C. Havens 40

Remarks by Margot Skarpeteig 41

REGULATION & ENFORCEMENT 42

Keynote by U.S. Senator Richard Blumenthal 42

Keynote by U.S. Senator Brian Schatz 43

Keynote by U.S. Senator Amy Klobuchar 44

Regulating AI across its value chain (Fireside chat) 45

Coordinated approaches for AI governance (Fireside chat) 47

| III |
Remarks by Co-hosts 50
stnetnoC
Remarks by Ambassador Ekaterini Nassika 50
Remarks by Stefanos Vitoratos 51
Remarks by Dr. Ellen M. Granberg 51
Remarks by Dr. Pamela Norris 51
Conclusion 52

| IV |
EXECUTIVE SUMMARY

Executive Summary
The barriers to a rights-based approach to AI the spotlight in intergovernmental forums such as
governance anchored in the rule of law have never the G7 and G20.
been more tangible. Increasing AI capabilities,
geopolitical tension, and market-driven interests cast Five years later, 2023 marked a year in which AI
doubt on our ability to collectively uphold the public governance climbed the agenda of policymakers
interest in the development and governance of AI and decision-makers worldwide. The release of
systems. technologies with increasingly general capabilities
has generated hype and accelerated a
The Athens Roundtable on AI and the Rule of Law is concentration of power in big tech, triggering a
the premier civil society-led multistakeholder forum societal-scale wake-up call. The growing threats to
on AI governance. When the forum was inaugurated democratic processes and human rights
in 2019, AI governance frameworks were incipient. presented by generative AI systems have
The first national strategies were just being prompted calls for regulation. We must
developed, international organizations were collectively demand rigorous standards of safety,
kickstarting ethical guidelines and seeking security, transparency, and oversight to ensure
stakeholder consensus, and AI policies were far from that these systems are developed and deployed
responsibly.

The fifth edition in numbers


This fifth edition of The Athens Roundtable took AI in Law & Society (TRAILS), UNESCO, OECD, World
place in Washington, D.C., on November 30th and Bank, IEEE, Homo Digitalis, the Center for AI and
December 1st, 2023. The event brought together Digital Policy (CAIDP), Paul, Weiss LLP, Arnold &
over 1,150 participants in a two-day dialogue Porter, and the Patrick J. McGovern Foundation—and
focused on coordinating efforts to leverage policy was proudly held under the aegis of the Greek
opportunities and co-design actionable solutions. Embassy to the United States. The event welcomed
Discussions focused on governance mechanisms for 65 speakers, including policymakers, AI
foundation models and generative AI globally. developers, legal experts, and civil society
Participants were encouraged to generate representatives. Whether on stage, in workshops, in
innovative “institutional solutions”—binding dedicated networking time, or online, the event
regulations, inclusive policy, standards-development gathered an audience of over 200 in-person and
processes, and robust enforcement mechanisms—to 950 online participants, representing over 100
align the development and deployment of AI countries in total. The range of distinguished
systems with the rule of law. speakers, including U.S. Senators and
Congressmembers, Members of the European and
The Roundtable was organized by The Future Tanzanian Parliaments, and renowned AI experts,
Society and co-hosted by esteemed partners—the underscored the Roundtable's commitment to a
Institute for International Science and Technology multifaceted and global dialogue on AI governance.
Policy (IISTP), the NIST-NSF Institute for Trustworthy

| 1 |
EXECUTIVE SUMMARY

1,150+ 100+ 65
ATTENDEES COUNTRIES SPEAKERS
REPRESENTED

In-person
registrants by Civil society
sector 24%
Private sector
22%
Government

Academia

Intergovernmental
8% organization
21%

24%

In-person
registrants by 49% Man
gender
Woman

Prefer not to say

Non-binary
1%
3%

47%

| 2 |
EXECUTIVE SUMMARY

Forward-looking takeaways
The Athens Roundtable informed the public of the initiative stemming from the UN's High-level Advisory
latest developments in AI legislation, regulation, Body on AI, and the evidence-based work of the
standards, and soft governance mechanisms to set OECD's AI Policy Observatory.
appropriate safeguards around foundation models
and generative AI. In this context, discussions With speakers and participants from over 100
spanned a broad range of themes, including security countries, the ideas and arguments presented
vulnerabilities of frontier AI models, policy reflected the viewpoints from a broad range of
considerations for open-source AI systems, cultural, political, and socioeconomic backgrounds.
geopolitical developments, risks of regulatory Moving forward, The Athens Roundtable
capture by industry, threats to information maintains one key commitment: To reexamine our
ecosystems, and strategies to mitigate the impact of current practices and assumptions, welcoming
AI on democratic processes. input and feedback from broad audiences, with
particular attention paid to engaging
Discussions probed into national efforts to advance underrepresented communities.
binding regulation, such as U.S. federal legislative
efforts, the next steps for federal agencies based on Below, we present key recommendations that
the U.S. Executive Order 14110 on Safe, Secure, and emerged from discussions. These recommendations
Trustworthy Development and Use of Artificial reflect The Athens Roundtable’s mission of
Intelligence, the European Union’s AI Act, and advancing responsible AI governance through a
China's generative AI regulation. Dialogues also harmonized framework encompassing legal
covered intergovernmental efforts, including the compliance and enforcement across jurisdictions.
impact of the G7 Hiroshima AI Process on corporate The report that follows presents a session-by-
governance, the reach of UNESCO's AI Ethics session summary for a more detailed context of
Recommendation (and subsequent implementation discussions.
efforts), the potential of a global AI governance

Key recommendations emerging from discussions


1. ADOPT COMPREHENSIVE HORIZONTAL AND VERTICAL REGULATIONS:
It is crucial that countries adopt legally binding requirements in the form of regulation to effectively
shape the behavior of AI developers and deployers towards the public interest. Self- and soft-
governance have not realized their promises regarding responsible AI and safety, especially when it
comes to foundation models. Sector-specific and umbrella regulations should be adopted in a
complementary manner across jurisdictions to fill the existing gap in AI governance. This approach
allows for robust governance across the entire AI value chain, from design and development to
monitoring, including for general-purpose foundation models that do not fit in any particular sector and
may not be covered by current or future sectoral regulations.

REGULATION & ENFORCEMENT

| 3 |
EXECUTIVE SUMMARY

2. STRENGTHEN THE RESILIENCE OF DEMOCRATIC INSTITUTIONS:


There is an urgent need to build resilience in democratic institutions against disruptions from
technological developments, notably of advanced general-purpose AI systems. Key elements in
building resilience are: capacity-building, in the form of employee training and talent attraction and
retention, across government institutions; institutional innovation to bring public sector structures and
processes up to date; enforcement authority spanning oversight of the development and deployment
of AI systems; and effective public participation. The latter is crucial to ensure that state institutions
remain democratic, maintain citizens’ trust, and act in the public interest.

TRUST & DEMOCRATIC RESILIENCE

3. ENHANCE COORDINATION AMONG CIVIL SOCIETY ORGANIZATIONS (CSOS) TO ADVANCE


RESPONSIBLE AI POLICIES:
In a policy environment with heavy industry lobbying and many conflicting viewpoints, it will be crucial
for CSOs to coordinate efforts in order to amplify promising policy recommendations. Key to this
coordination will be ensuring that CSOs involved are demographically, culturally, and politically
representative of the population at large, and that they consistently listen to the voices of the
communities most impacted by emerging technologies.

GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

4. INVEST IN THE DEVELOPMENT OF METHODS TO MEASURE AND EVALUATE FOUNDATION


MODELS’ CAPABILITIES, RISKS, AND IMPACTS:
Measurement and evaluation methods play an indispensable role in understanding and monitoring
technological capabilities, establishing safeguards to protect fundamental rights, and mitigating large-
scale risks to society. However, current methods remain imperfect and will require persistent
development in the years to come. Governments should invest in multi-disciplinary efforts to develop
measurement and evaluation methods, such as benchmarks, capability evaluations, red-teaming tests,
auditing techniques, risk assessments, and impact assessments.

MEASUREMENT & STANDARDS

5. INCLUDE GLOBAL MAJORITY REPRESENTATION AND IMPACTED STAKEHOLDERS IN


STANDARD-SETTING INITIATIVES:
Many standard-setting initiatives still lack input from civil society organizations that represent impacted
communities. Policymakers and leaders of such initiatives must strive to understand and address
structural factors that have led to the under-representation or lack of participation by certain groups in
international standard-setting efforts. Potential mechanisms to promote participation include
remunerating underrepresented groups and restructuring internal processes to tangibly engage them,
rather than provide mere formal representation.

GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

| 4 |
EXECUTIVE SUMMARY

6. DEVELOP AND ADOPT LIABILITY FRAMEWORKS FOR FOUNDATION MODELS AND


GENERATIVE AI:
Liability frameworks must address the complex, evolving AI value chain, so as to disincentivize
potentially harmful behavior and mitigate risks. Companies that make foundation models available to
downstream deployers across a range of domains benefit from a liability gap, where the causal chain
between development choices and any harm caused by the model is currently overlooked. Regulation
that establishes liability along the AI value chain is crucial to engender accountability and fairly
distribute legal responsibility, avoiding liability being transferred exclusively onto deployers or users of
AI systems.

REGULATION & ENFORCEMENT

7. DEVELOP AND IMPLEMENT A SET OF REGULATORY MECHANISMS TO OPERATIONALIZE


SAFETY BY DESIGN IN FOUNDATION MODELS:
Given the borderless character of the AI value chain, regulatory mechanisms must be interoperable
across jurisdictions. Regulators should invest in regulatory sandbox programs to test and refine
foundation models and corresponding regulatory safeguards before deployment.

SAFETY & SECURITY

8. CREATE A SPECIAL GOVERNANCE REGIME FOR DUAL-USE FOUNDATION MODEL RELEASE:


Decisions regarding the release methods for dual-use foundation models should be scrutinized, as
they pose societal risks. Exhaustive testing before release would be in the public interest for models at
the frontier. Further discussion among stakeholders should identify model release methods that
maximize the benefits of open science and innovation without sacrificing public safety.

SAFETY & SECURITY

Moving forward, The Athens Roundtable


maintains one key commitment: To reexamine
our current practices and assumptions,
welcoming input and feedback from broad
audiences, with particular attention paid to
engaging underrepresented communities.

| 5 |
EXECUTIVE SUMMARY

(5) Include global majority


(3) Enhance coordination among civil
GLOBAL COORDINATION & representation and impacted
society organizations (CSOs) to
CIVIL SOCIETY INCLUSION stakeholders in standard-setting
advance responsible AI policies
initiatives

TRUST & DEMOCRATIC (2) Strengthen the resilience of


RESILIENCE democratic institutions

(4) Invest in the development of


MEASUREMENT & methods to measure and evaluate
STANDARDS foundation models’ capabilities, risks,
and impacts

(7) Develop and implement a set of


(8) Create a special governance
regulatory mechanisms to
SAFETY & SECURITY regime for dual-use foundation model
operationalize safety by design in
release
foundation models

(6) Develop and adopt liability


REGULATION & (1) Adopt comprehensive horizontal
frameworks for foundation models
ENFORCEMENT and vertical regulations
and generative AI

Looking ahead, The Future Society remains committed to facilitating dialogues and collaborations. We aim to
develop institutional innovations that ensure that the trajectory of AI development aligns with fundamental
rights and the rule of law for the benefit of all.

| 6 |
REMARKS BY THE FUTURE SOCIETY

Remarks by The Future Society

REMARKS | Yolanda Lannquist


Yolanda Lannquist emphasized the need for enhanced public and
governmental involvement in AI governance. Critiquing the dominance of
private sector interests in AI development, Lannquist stressed the need for
legislation to implement guardrails that address the safety, security, and
ethical risks presented by AI systems. She highlighted the dangers of
prioritizing market growth over safety through the premature launch of
Yolanda Lannquist advanced AI products. Lannquist also pointed to some of the risks
Director, Global AI Governance associated with open access to model weights, such as the ability to remove
THE FUTURE SOCIETY
any existing guardrails. She stressed the importance of establishing proactive
policy interventions, monitoring, and accountability mechanisms. Lannquist
underscored the urgency of governing foundation models as they become
embedded in consumer applications.

REMARKS | Nicolas Miailhe


Nicolas Miailhe addressed the polarization in AI governance and the need for
society as a whole to begin to grapple with the risks associated with AI, from
immediate concerns to existential threats. Miailhe stressed the importance of
collective action grounded in societal values. Miailhe further pointed to Sam
Altman's temporary ouster from OpenAI as underscoring the conflict between
public safety and profit-driven motives. To this end, Miailhe suggested,
Nicolas Miailhe society should demand legally binding frameworks that prioritize the
Founder public interest in AI development. Miailhe also called for a tiered
THE FUTURE SOCIETY
governance approach toward AI models based on their capabilities, a
balanced examination of open-source AI's benefits and risks, and the need to
apportion liability appropriately along the AI value chain, emphasizing the
responsibility of lawmakers and policymakers to act.

| 7 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

GLOBAL COORDINATION & CIVIL SOCIETY


INCLUSION

KEYNOTE | Vilas Dhar


Vilas Dhar | President and Trustee, Patrick J. McGovern Foundation

Vilas Dhar, President of the Patrick J. McGovern global majority in AI policy, transcending the
Foundation, illuminated the deep-rooted, traditional focus on the US and the EU.
philosophical nature of AI governance discussions,
acknowledging the significant contributions of Dhar proposed three key priorities:
humanists, philosophers, and ethicists over the
decades. He stressed the importance of balancing 1. Drive public investment to bridge the digital
human dignity, justice, and equity with private sector divide and ensure broadband connectivity for
interests, while also creating avenues for solutions all.
that serve universal human interests. 2. Address the data gap in AI development by
prioritizing the collection and use of diverse data
Dhar underscored the critical need for inclusivity in sets that truly represent global populations,
AI policy-making, pointing out the stark digital especially in areas like health and drug
divide—with 2.6 billion people still offline—and discovery.
the dominance of Global North governments in 3. Leverage policy mechanisms to close the
shaping our collective future. He highlighted the technical capacity gap among the global
ongoing struggle for civil society, particularly those majority. This is paramount to fostering a
representing marginalized communities, to gain a diverse community of socially conscious AI
meaningful voice in the AI conversation. In the practitioners.
context of the U.S., Dhar criticized the
disproportionate focus on big tech narratives and Dhar concluded with optimism, highlighting the
the over-reliance on voluntary self-regulation, which potential for shared values to lead to policy
sidelines civil society’s participation. Dhar also harmonization, ultimately benefiting economic,
emphasized the need to include perspectives of the social, and political outcomes around the world.

| 8 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

PANEL | Global Governance: What’s next for


international institutions
Amandeep Singh Gill | Secretary Generals’ Envoy for Technology, United Nations
Gabriela Ramos | Assistant-Director General for Social and Human Sciences, UNESCO
Ulrik Vestergaard Knudsen | Deputy Secretary-General, OECD
Gary Marcus | Emeritus Professor of Psychology and Neural Science at NYU and CEO at the Center for the
Advancement of Trustworthy AI
Susan Ariel Aaronson (moderator) | Professor of Intl. Affairs, Director of the Digital Trade and Data Governance
Hub, and co-PI of NIST-NSF TRAILS, The George Washington University

Main Takeaways
ENSURE THE RESILIENCE OF DEMOCRATIC INSTITUTIONS WITH REGULATION
AND CAPACITY-BUILDING:
Given the host of unintended outcomes they may occur through the development and deployment of large
models, it is urgent to strengthen democratic institutions and develop mechanisms to mitigate harms when they
happen. This should be done through increased capacity-building in the public sector and a move towards
binding laws.

IMPLEMENT GLOBAL PRE-DEPLOYMENT SAFETY MECHANISMS:


To mitigate the potential widespread harms of foundation models, speakers recommended international
coordination toward developing and deploying ex-ante evaluations, risk assessment methodologies, human
rights impact assessments, red teaming, AI capability control mechanisms, interoperable AI auditing practices,
and certification ecosystems.

SHAPE AI GOVERNANCE FOR THE PUBLIC INTEREST:


The panel highlighted the urgency to think beyond safety and harm towards a broader notion of public interest,
including AI’s impact on human rights, environmental sustainability, and economic inclusion. Speakers
expressed concern about the growing concentration of power in AI, and stressed the importance of the
involvement of a diverse range of stakeholders in AI governance initiatives.

| 9 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

Discussion
This panel addressed two questions that rose to the Adding to the multistakeholder challenge, he noted
core of international AI governance discussions in the looming risk of regulatory capture, given the
2023. First, how can we collectively facilitate an high concentration of power in a few corporations.
international AI governance regime? Second, what Finally, Dr. Gill emphasized that transparency in data
safety mechanisms are needed to address the source disclosure remains an underexplored area in
borderless impact of foundation models? AI governance with global implications.

OECD’s Deputy Secretary-General Ulrik Vestergaard UNESCO’s Assistant Director-General for Social and
Knudsen opened the discussion by classifying the Human Sciences, Gabriela Ramos, underlined the
rise of generative AI as a watershed moment, calling necessity of ex-ante AI assessments and
for updates on international institutions’ work. He adherence to human rights standards. She
detailed the OECD's plans to review its 2019 described UNESCO's collaboration in creating an
principles in light of evolving AI capabilities and AI ethics observatory with civil society organizations
the OECD’s ongoing efforts to collect evidence of AI to ramp up analytical efforts to inform policymaking.
impacts and strengthen its AI experts community. In addition to analytical work, UNESCO has been
Knudsen acknowledged the OECD’s inherent focus actively working with national governments to apply
on only a subset of countries, but called for broader readiness assessments and build public sector
collaboration in AI governance to facilitate the capacity to implement the Recommendation on the
convergence of approaches. Ethics of Artificial Intelligence, adopted by 194
countries. Reflecting on growing concerns about
UN Tech Envoy Amandeep Gill stressed the need generative AI and foundation models, Ramos
for an inclusive network of institutionalized stressed the importance of legal responsibility and
responses to AI, in which different international liability frameworks for AI developers, highlighting
institutions and blocks of countries share knowledge the challenge of implementing these globally.
and coordinate harmonized approaches. He
highlighted the UN Tech Envoy’s mandate, whose Gary Marcus dissected the shift in the public
advisory body is developing a comprehensive risk discourse from “trust” to the “safety” of AI systems,
assessment framework encompassing short- to pointing out how events such as ChatGPT's launch
long-term risks. Dr. Gill is also focusing on contributed to growing safety-oriented concerns. He
democratizing AI opportunities aligned with criticized the rapid commercialization of AI before
sustainable development goals (SDGs). Key robust safety guardrails were in place, exemplified
challenges include updating practices and by various instances of dangerous outputs by AI
responses promptly in a fast-evolving AI landscape, systems, as was the case with Microsoft’s Sydney.
improving multistakeholder participation, and Finally, Dr. Marcus called for work to increase
coordinating responses across industry, civil society, reproducibility in the development of AI systems.
and governments at the national and international
levels.

Reflecting on growing concerns about generative


AI and foundation models, Ramos stressed the
importance of legal responsibility and liability
frameworks for AI developers...

| 10 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

FIRESIDE CHAT | Trustworthy AI in the Global South


Neema Lugangira | Member of Parliament Tanzania, and Chair, African Parliamentary Network on
Internet Governance
Yolanda Botti-Lodovico (moderator) | Policy and Advocacy Lead, Patrick J. McGovern Foundation

Main Takeaways
LAWMAKERS HAVE A KEY ROLE IN INTERNATIONAL LEGAL AND REGULATORY HARMONIZATION:
Lawmakers can advance AI governance coordination in their respective jurisdictions by enacting laws, voting
on and proposing investment priorities, and raising awareness within government, among colleagues, and with
constituents. Lawmakers, as a crucial stakeholder group, must be engaged in discussions and convenings on
AI governance.

DEVELOP LIABILITY FRAMEWORKS AND REGULATIONS THAT ADDRESS THE GLOBAL CHARACTER OF
THE AI SUPPLY CHAIN AND ENABLE REDRESS FROM IMPACTED PEOPLE:
Lawmakers, especially those of advanced economies, must not overlook the impact on the Global South
through companies’ supply chains. Countries should enact liability frameworks that hold actors accountable and
strengthen corporate responsibility beyond their borders.

EXPAND REPRESENTATION AND PARTICIPATION OF THE GLOBAL MAJORITY IN INTERNATIONAL AI


GOVERNANCE DECISION-MAKING PROCESSES:
Recent high-level convenings on AI governance have had limited geographic representation, which puts at risk
any outcome with global ambitions. Global AI governance decision-making processes should strive to include
global majority representatives across different stakeholder groups: governments, independent academic
experts, civil society representatives, and industry representatives.

| 11 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

Discussion
In an insightful conversation on trustworthy AI, While Global South countries are economically,
Honorable Member of the Tanzanian Parliament politically, and socially affected by the
Neema Lugangira and policy expert Yolanda Botto- development of AI systems, they have often not
Ludovico highlighted the necessity for global been meaningfully represented in international AI
inclusivity in decision-making, knowledge sharing, governance convenings such as the UK AI Safety
and capacity building, as well as robust regulatory Summit. MP Lugangira emphasized the urgency of
frameworks that prevent exploitative dynamics in the transforming African countries from mere consumer
Global South. The discussion underscored the markets to respected and active participants and
importance of equitable AI development, developers in the AI landscape. A positive step
international collaboration, and accountability in toward that direction is the African Union’s ongoing
AI governance, emphasizing that the benefits of AI collaboration with the OECD AI in writing a
should be democratized globally in a secure, safe, continental AI Strategy. This strategy will be key in
and ethical manner. leveraging AI to address critical challenges like food
insecurity on the African continent.
Drawing from her experience in Tanzania and
internationally in the Inter-Parliamentary Union (IPU) Drawing attention to the global majority’s crucial role
and the African Parliamentary Network on Internet in AI development, MP Lugangira highlighted that
Governance, MP Lugangira pointed out the crucial data is the backbone of foundation models. She
need to equip lawmakers with the knowledge and raised concerns about companies exploiting data
tools to contribute to AI governance effectively. She from African nations without compensation and
advocated for increased participation of stressed the importance of enacting laws across
parliamentarians in AI governance discussions jurisdictions allocating liability throughout the
worldwide and underscored the urgent need for global AI supply chain. Countries in the Global
global legislative attention on AI's societal North developing AI regulations should set a
implications. In her role at the Inter-Parliamentary standard of behavior that allows individuals in the
Union, MP Lugangira is co-sponsoring a draft Global South to hold AI companies accountable for
resolution for the IPU’s first general assembly of harms reproduced beyond companies’ headquarters
2024, focusing on the impact of AI on democracy jurisdictions.
and the rule of law. If approved, the resolution can
influence national legislative discussions across the
193 countries represented at the IPU.

Hon. MP Lugangira pointed out the crucial need


to equip lawmakers with the knowledge and tools
to contribute to AI governance effectively. She
advocated for increased participation of
parliamentarians in AI governance discussions
worldwide and underscored the urgent need for
global legislative attention on AI's societal
implications
| 12 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

FIRESIDE CHAT | The Path to Generative AI


Regulation in China
Yi Zeng | Professor, Director; Brain-inspired Cognitive Intelligence Lab and International Research Center for AI
Ethics and Governance, Institute of Automation, Chinese Academy of Sciences; Founding Director, AI for SDGs
Cooperation Network; Founding Director, Center for Longterm AI
Samuel Curtis (moderator) | Senior Associate, The Future Society

Main Takeaways
EMBRACE A COMPLEMENTARY REGULATORY APPROACH:
Countries should learn from each other, with a focus on integrating both vertical approaches, as China has
applied in some sectors, and horizontal approaches, akin to the EU's for comprehensive AI governance.

INCREASE THE PARTICIPATION OF INDEPENDENT ACADEMIC EXPERTS:


Prof. Zeng advocated for increasing academic experts’ participation in high-level advisory groups, like the
United Nations High-Level Advisory Body on AI, to ensure a diversity of perspectives and a balanced approach
to AI governance.

BROADEN THE INTERNATIONAL DIALOGUE TO INCLUDE DIVERGENT VIEWPOINTS:


Global coordination efforts must move beyond discussions among “like-minded” countries to more inclusive
and substance-oriented dialogues. It is urgent to bridge diverse perspectives and foster a collaborative
international environment for AI development and regulation.

| 13 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

Discussion
In a conversation with Professor Yi Zeng, a safety, called the Ditchley Declaration—Prof. Zeng
renowned expert in AI ethics and governance, TFS’s emphasized the necessity of inclusive dialogue in
Samuel Curtis inquired about Prof. Zeng’s views on global coordination efforts. He stressed the
China's unique approach to AI governance and its importance of achieving a unified, global
contributions to the global AI regulatory landscape. understanding of the risks presented by AI
development and that this will require engaging with
During the discussion, Yi Zeng emphasized the nations across the geopolitical spectrum. The
symbiotic nature of various regulatory participation of China in the summit was an example
approaches, contrasting the European Union's of this inclusive approach, ensuring that discussions
broad, horizontal AI Act with some of China's more on AI safety encompass a diversity of cultural and
targeted, vertical regulations focusing on specific AI political viewpoints.
applications, such as recommendation systems and
generative AI. Prof. Zeng highlighted the benefits of In addition to these policy discussions, Prof. Zeng
China's approach, particularly its specificity in underscored the crucial role of academia in shaping
addressing AI challenges. He also suggested that, AI governance. He highlighted the unique
conversely, China could learn from the EU’s broader, contributions of independent academic experts in
horizontal framework. providing balanced, interdisciplinary perspectives on
AI's long-term risks. Prof. Zeng advocated for
In his analysis of international AI safety commitments independent academic expertise to guide AI
—including the UK AI Safety Summit, the Bletchley development beyond national competition,
Declaration, and the joint statement signed by world- focusing on collaborative problem-solving.
leading academics highlighting the importance of AI

Prof. Zeng emphasized the necessity of inclusive


dialogue in global coordination efforts. He
stressed the importance of achieving a unified,
global understanding of the risks presented by AI
development and that this will require engaging
with nations across the geopolitical spectrum.

| 14 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

REMARKS | Yoichi Iida


Yoichi Iida | Deputy Director General, Ministry of International Affairs and Communication, Japan

Yoichi Iida joined The Athens Roundtable to generative AI and foundation models; guiding
celebrate the completion of the first phase of the principles for AI actors across the value chain; a set
Hiroshima AI Process, with the guiding principles and of measures and actions targeted at advanced AI
the G7 code of conduct. He shared a reflection on (generative AI and foundation models) developers,
his experience as a representative of Japan and including the code of conduct; and a set of projects
chair of the Hiroshima AI Process Working Group, that will explore potential solutions to respond to
which took place on December 1st, 2023. emerging risks and challenges of those
technologies. The projects aim to tackle foundation
Mr. Iida commented on the Hiroshima AI Process models’ lack of transparency and the spread of AI-
report, which includes a comprehensive framework enabled disinformation.
that promotes safe, secure, and trustworthy
generative AI. Notably, it was developed with the Mr. Iida commended the Hiroshima AI Process
intention of being adopted beyond G7 member Working Group’s strong commitment to collaborating
countries. The framework is comprised of four with a variety of stakeholders across the world and
elements: The OECD report covering the potential other multilateral organizations to operationalize the
risks, challenges, and opportunities brought by framework.

... the Hiroshima AI Process report ... was


developed with the intention to be adopted
beyond G7 member countries. The framework is
comprised of four elements: The OECD report
covering the potential risks, challenges, and
opportunities brought by generative AI and
foundation models; guiding principles for AI
actors across the value chain; a set of measures
and actions targeted at advanced AI developers,
including the code of conduct; and a set of
projects that will explore potential solutions to
respond to emerging risks and challenges of
those technologies.

| 15 |
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION

REMARKS | U.S. Representative Sara Jacobs


Sara Jacobs | United States Representative

U.S. Representative Jacobs highlighted the need for definition of “AI safety,” encompassing the entire
globally coordinated AI governance, stressing the spectrum of AI risks. It is particularly important to
need to address AI's impact on the Global South, address bias and ongoing harms incurred by
and advocated for inclusive, multilateral marginalized communities for increased safety.
engagements.
Finally, U.S. Representative Jacobs echoed calls for
U.S. Representative Jacobs underscored the robust oversight and regulation. She encouraged
importance of incorporating diverse voices, leading nations, especially the US, to adopt new
especially Civil Society Organizations (CSOs) and legislation and develop new resources for AI
governments from the Global South, into AI policy oversight. She stressed the strategic role new
discussions. She also urged for a comprehensive institutions like the U.S. AI Safety Institute can play
approach to AI safety, encouraging the AI in fostering adaptive and effective AI governance.
governance community to develop a broad

| 16 |
TRUST & DEMOCRATIC RESILIENCE

TRUST & DEMOCRATIC RESILIENCE

PANEL | The impact of Generative AI on Elections


Dr. Rebekah Tromble | Director, Institute for Data, Democracy & Politics, George Washington University
Caio Machado | Executive Director, Instituto Vero
Marielza Oliveira | Director of the UNESCO Communications and Information Sector's Division for Digital
Inclusion, Policies, and Transformation
Paul Nemitz | Principal Adviser on the Digital Transition, European Commission
Merve Hickok (moderator) | President, Center for AI and Digital Policy

Main Takeaways
REGULATE THE FINANCING OF DIGITAL CAMPAIGNS AND THE USE OF MICROTARGETING FOR
ELECTORAL PURPOSES:
The unregulated use of generative AI in electoral campaigns is extremely harmful to democracy. Governments
must ensure that electoral outcomes don’t hinge more on financial resources and technical capabilities than on
democratic discourse and voter engagement.

ENSURE THE SAFETY OF JOURNALISTS AND PROTECT AUTHENTIC CONTENT:


Emphasizing the need to protect professional and evidence-based journalism, the panelists called for stringent
redressing measures against attacks on journalists. Policymakers must consider coordinating toward global
norms to manage the influx of AI-generated content and misinformation. Other urgent measures to preserve
the digital ecosystem’s integrity include enforcing standardized guidelines for platform governance, providing
incentives for authentic content creation, and mechanisms to label AI-generated or AI-altered content, such as
watermarks.

ESTABLISH AI GOVERNANCE EXPERT GROUPS FOR ELECTIONS:


Speakers proposed the creation of a group that provides AI governance support at scale and on-demand,
especially for electoral bodies in regions with fragile electoral systems. Independent expert groups can help
enhance the integrity and security of electoral processes worldwide.

| 17 |
TRUST & DEMOCRATIC RESILIENCE

Discussion
In 2024, approximately 3.9 billion people—48% of advertisements and party financing. Reflecting on
the world's population—will participate in general the Digital Services Act and the European context,
elections across 54 countries, including five Nemitz highlighted that jurisdictions that adopt
nuclear-armed states. At this critical political platform and electoral advertising regulations with
juncture, policies must address AI’s potential to policies to combat disinformation would be less
amplify disinformation, heighten cybersecurity likely to have elections swayed by digital targeting
threats, and disrupt the information ecosystem. techniques.

Opening the discussion, Dr. Rebekah Tromble Caio Machado criticized the resistance tech
analyzed the potential impact of generative AI on companies often exhibit towards regulation. The
voter behavior and election dynamics. She absence of institutional mechanisms to diagnose
emphasized the crucial role of AI transparency in the problems and understand the impact of
upcoming 2024 U.S. presidential elections. Dr. technologies creates a trust gap, which may lead
Tromble advocated for data and model disclosure to radical and hasty, non-technical solutions or
requirements and emphasized the need to educate simply inaction. He also pointed out the need to
and inform the public about the potential impact of AI rethink information production, usage, validation, and
on voter behavior and election dynamics. dissemination. Machado shared his experience
combating disinformation in Brazil, where Institute
The discussion followed with an analysis of the Vero collaborated with social media creators to
adequacy of the current European regulatory educate young people and judiciary staff on fact-
landscape in addressing the challenges posed by checking and open-source investigation tools.
generative AI and disinformation. Paul Nemitz Regulations and institutional mechanisms should be
defended the vetting of large AI models by tailored to their respective local contexts while
regulators before public release, similar to the pursuing a global goal: strengthening democratic
safety requirements of other industries such as institutions’ resilience to technological disruption.
automotive and pharmaceuticals. Focusing on
elections, Nemitz emphasized the dangers of In her response, Marielza Oliveira from UNESCO's
electoral outcomes’ greater dependence on digital Division for Digital Inclusion, Policies and
advertising and sophisticated targeting technologies Transformation articulated the organization's
than on the quality of political arguments or approach to balancing freedom of expression with
candidates’ trustworthiness. Nemitz stressed the
urgent need to rethink current models of election

Regulations and institutional mechanisms should


be tailored to their respective local contexts while
pursuing a global goal: strengthening democratic
institutions’ resilience to technological disruption.

| 18 |
TRUST & DEMOCRATIC RESILIENCE

oversight for a democratic information ecosystem, accepted in the economic realm but problematic for
rooted in Article 19 of the International Covenant for electoral integrity.
Civil and Political Rights. She underscored the
threats of the rapid diffusion of harmful AI- Furthermore, society must be equipped with critical
generated content on social media platforms. In thinking skills, media literacy, and access to
the Global South, infrastructure and skills gaps information to properly make sense of the digital
exacerbate disparities in the information ecosystem. Society must demand accountability of
ecosystem. To fight the surge of AI-generated tech corporations, politicians, and governments for
content, Dr. Oliveira emphasized UNESCO's efforts the development and deployment of potentially
in assessing digital ecosystems: Currently, 44 harmful AI systems. Speakers also emphasized the
countries are voluntarily assessing the extent to importance of supporting trustworthy information
which their digital ecosystems are human rights- sources, journalists, and local news in light of AI-
based, open, accessible, and multi-stakeholder-led. generated content.
These assessments help identify and address
systemic shortcomings, such as issues related to Finally, speakers concluded that transparency alone
language barriers, accessibility, and inclusion. is insufficient for preserving democracy.
Governments should adopt positive agendas
Speakers analyzed possible restrictive measures to focused on rebuilding trust, lest environments of
protect electoral integrity. The discussion also systemic distrust undermine the political institutions
touched upon the need to rethink business models themselves.
that significantly impact democracy and the rule
of law. Micro-targeting, for instance, is widely

| 19 |
TRUST & DEMOCRATIC RESILIENCE

FIRESIDE CHAT | Uncovering the Use of Generative AI


in Judicial Contexts
Juan David Gutierrez Rodriguez | Associate Professor, School of Government of Universidad de los Andes
Miriam Stankovich | Principal Digital Policy Specialist, DAI
Linda Bonyo | Founding Director, Africa Law Tech; Founder, Lawyers Hub Kenya
Kimberly H. Kim | Assistant Chief Administrative Law Judge, California Public Utilities Commission
Cédric Wachholz (moderator) | Chief of Section, Digital Innovation and Transformation Section, UNESCO

Main Takeaways
DEVELOP AND DEPLOY FORMAL TRAINING AND GUIDELINES FOR THE USE OF AI IN THE JUDICIARY:
The panel underscored the need for formal training and guidelines on the judicial use of AI, emphasizing that AI
tools must be leveraged in an informed, ethical, and responsible manner. UNESCO has been leading efforts in
this area, with the MOOC on AI and the Rule of Law co-produced with The Future Society, the Toolkit on AI and
the Rule of Law, and the upcoming guidelines for the use of generative AI in judicial contexts. Acknowledging
the diverse impacts of AI across different regions, the discussion highlighted the need for contextual
adaptations of training and guidelines and equitable access to resources, particularly in the Global South.

IMPLEMENT INSTITUTIONAL INNOVATION IN THE JUDICIARY TO LEVERAGE AI TO EXPAND ACCESS


TO JUSTICE:
The conversation pointed towards future-proofing the judiciary against the challenges posed by AI, including
developing adequate mechanisms to manage increased caseloads and reviewing processes to ensure fair
access to justice in an increasingly digital legal landscape.

| 20 |
TRUST & DEMOCRATIC RESILIENCE

Discussion
This fireside chat brought together experts from Linda Bonyo focused on the context-specific impact
different corners of the world to discuss the of technology, highlighting the disparities in AI tool
burgeoning intersection of generative AI and the performance and adoption across different
judiciary. Moderated by Cédric Wachholz, panelists regions, particularly in Kenya and the broader Global
shed light on AI utilization in judicial settings, the South. She pointed out that when procuring cutting-
associated risks, and the pressing need for edge technologies, such governments often must
guidelines and training in this domain. rely upon products developed in the Global North,
emphasizing the issue of vendor lock-in due to the
Professor Juan David Gutierrez Rodriguez initiated lack of viable local alternatives. Bonyo further
the conversation by sharing findings from UNESCO's advocated for equitable access to computing
global survey on generative AI in the judiciary. This resources in the Global South, helping countries
survey, receiving responses from nearly 100 transcend the roles they might otherwise be
countries, indicated a significant gap between confined to—mere consumption and data labeling.
legal operators’ familiarity with AI and its practical
application in professional legal contexts. While Judge Kimberly Kim provided a cautiously pragmatic
most respondents were acquainted with AI, only a perspective on the use of generative AI in the U.S.
fraction used AI tools, such as large language judiciary. She highlighted the judiciary's resistance to
models, for professional purposes. Concerns that technological changes and the need for tools and
were highlighted included data security, reliability of programs to educate legal operators about
information, and potential violations of privacy and generative AI. Preparing the judiciary for the
copyright. Notably, a vast majority of the operational impacts of AI is urgent. Courts might see
respondents had not received formal AI training, an increase in dockets due to AI-related claims and
indicating a dire need for educational initiatives and lawsuits. She furthermore stressed the urgent need
mandatory rules to govern the use of AI in legal for equitable access to justice: costly AI-assisted
settings. legal tools will confer strategic legal advantages
to those with access to them, which will likely
Miriam Stankovich underscored the challenges contribute to a growing digital divide in the justice
posed by generative AI tools in the judiciary. system.
Drawing from her collaboration with UNESCO on the
recently launched Global Toolkit on AI and the Rule Professor Rodriguez concluded the panel with
of Law, she pointed out the tendency of generative insights into upcoming UNESCO recommendations
AI tools to create plausible but not necessarily on generative AI use in judicial contexts. He
accurate outputs, "hallucinate," and amplify bias. emphasized the need to test AI tools before
Stankovich emphasized the urgent need for deployment, particularly in high-risk environments
enhanced regulation, governance, and, critically, like the justice sector, and to assess their impact on
digital literacy among judges to navigate the human rights.
complexities of AI in judicial proceedings.

| 21 |
TRUST & DEMOCRATIC RESILIENCE

FIRESIDE CHAT | Trends and Challenges for AI


Governance in 2024
Keith Sonderling | Commissioner, U.S. Equal Employment Opportunity Commission (EEOC)
Peter Schildkraut | Technology, Media & Telecommunications Industry Team Co-Leader, Arnold & Porter
Tawana Petty | 2023-2025 Just Tech Fellow, Social Science Research Council
Nicolas Moës (moderator) | Executive Director, The Future Society

Main Takeaways
DEVELOP STRATEGIES AND ORGANIZATIONAL STRUCTURES THAT ALLOW FOR CIVIL SOCIETY
ORGANIZATIONS TO UNITE IN ADVOCATING FOR PROMISING AI POLICIES:
Civil society is composed of actors with a wide range of priorities, values, and backgrounds, but they share a
common goal of advancing the public interest. Civil society organizations should collectively identify and
advance promising AI policies to counteract growing corporate lobbying efforts.

STRIVE FOR DIVERSITY IN CORPORATE COMPLIANCE AND RISK MANAGEMENT BOARDS:


Internal boards should be capable of foreseeing a range of risks and recommending appropriate actions. A
greater diversity of perspectives would enable companies to identify different layers of risks and foresee
negative outcomes, from product development to post-deployment maintenance.

FACILITATE THE PARTICIPATION OF IMPACTED COMMUNITIES IN REGULATORY PROCESSES:


Tawana Petty highlighted the power of grassroots movements and local organizations in shaping AI policy,
stressing the importance of including voices often sidelined in the regulatory process. In establishing
appropriate safeguards, regulatory discussions must consult with impacted communities and consider case
studies that elucidate the impact of emerging technologies on the ground.

PRIORITIZE THE EXECUTIVE POWER’S AI CAPACITY-BUILDING:


Government agencies must urgently develop expertise and capabilities to understand, audit, and effectively
regulate AI technologies. Agencies should seek knowledge from independent academic experts to fill the
current skills gap.

| 22 |
TRUST & DEMOCRATIC RESILIENCE

Discussion
This panel centered on the evolving landscape of AI into ongoing litigation that illustrates the judiciary’s
governance in 2024, exploring the balance between role in clarifying the application of existing laws to AI
technological innovation and the need for robust technologies, thus shaping the trajectory of AI
regulatory frameworks. The conversation highlighted regulation. In addition, Schildkraut analyzed the
the importance of diverse stakeholder involvement industry's emerging challenges for compliance and
in shaping AI policy and the challenges of adapting risk management. Notably, companies should
existing regulatory mechanisms to the nuanced ensure their risk- and impact-assessment bodies
demands of AI technologies. are comprised of professionals with technical AI
expertise and professionals with different
Keith Sonderling emphasized the significance of backgrounds and subject matter expertise. Finally,
involving new stakeholders, such as auditors and they should be empowered within the company to
AI developers, in the regulatory space, particularly make decisions to discontinue the development or
in the context of employment and civil rights. He production of dangerous AI models.
highlighted the necessity of integrating these new
perspectives to ensure AI technologies are Centering the main challenge for 2024 on AI policy
developed and deployed without discriminating and representation, Tawana Petty stressed that,
against marginalized groups. Government agencies although the US has advanced in developing AI
such as the EEOC have a crucial role to play, governance frameworks such as the Blueprint for an
regardless of legislative developments and new AI Bill of Rights, there are still missing voices in the
regulations, in enforcing existing laws in cases dialogues and decision-making processes
pertaining to the use of AI and its impact on people, pertaining to AI governance. Petty underscored the
as well as applying rigorous oversight to the potential of people and grassroots movements in
deployment of AI in the sectors they are mandated influencing policy and demanding inclusion when
for. The use of natural language processing tools in civil society groups are marginalized in decision-
hiring assessments, for instance, might discriminate making processes. She advocated for the inclusion
against non-native English speakers or people with of diverse voices—particularly those most
speech impairments, which falls under the purview impacted by AI technologies—in regulatory
of the EEOC. discussions. In order to advance responsible AI
policy, civil society organizations should coordinate
Peter Schildkraut discussed the judiciary's vital role, efforts, leveraging intersectionality and elevating
especially in the U.S., in defining rights and marginalized voices rather than seeking a monolithic
addressing AI-related harms. He provided insights approach.

In order to advance responsible AI policy, civil


society organizations should coordinate efforts,
leveraging intersectionality and elevating
marginalized voices rather than seeking a
monolithic approach.

| 23 |
TRUST & DEMOCRATIC RESILIENCE

REMARKS | Cédric Wachholz


Cédric Wachholz | Chief of Section, Digital Innovation and Transformation Section, UNESCO

Cédric Wachholz shared insights from a recent However, cases in which AI systems are the object
survey conducted among UNESCO's network of of litigation remain markedly complex. Wachholz
35,000 judicial operators from over 100 countries, cited a case in Brazil where the use of “smart
focusing on generative AI and its role in the judiciary. billboards” in the São Paulo metro system to predict
The survey revealed a dramatic increase in AI use riders’ emotions and other attributes was
within legal systems worldwide, raising important challenged.
questions about AI's role in enhancing justice
while upholding human rights and democratic Wachholz also mentioned UNESCO's significant
values. role in training judicial operators and the growing
demand for AI training. He referenced the AI and
While there had initially been an international the Rule of Law MOOC, launched by UNESCO in
convergence towards a multidimensional approach partnership with The Future Society and other
to AI governance focused on the protection of organizations, which has educated over 5,900
human rights, recent industry developments judicial operators from 141 countries. Additionally,
suggested market pressures often prioritize profit UNESCO recently introduced a Global Toolkit on AI
over safety and private interests over public ethical and the Rule of Law for the judiciary.
AI governance.
In conclusion, Wachholz called for collaborative
Meanwhile, governments are grappling with efforts to transform discussions into action, urging
fostering innovation-friendly environments while participants to work together to ensure AI supports
establishing clear, effective AI guardrails. In the rather than undermines justice.
judiciary, AI offers potential benefits in decision-
making, access to justice, and crime prevention.

| 24 |
SAFETY & SECURITY

SAFETY & SECURITY

KEYNOTE | Yoshua Bengio


Yoshua Bengio | Scientific Director, Mila & IVADO, Full Professor, Samsung AI Professor, Université de
Montréal, Canada CIFAR AI Chair

In an inspiring keynote, Professor Yoshua Bengio continue to develop more robust measurement
laid out his perspective on countering safety and and evaluation mechanisms.
security risks of increasingly capable AI systems,
examining how the balance of power is pivotal for Prof. Bengio also emphasized the responsibilities of
the survival of democracies. companies and governments in the AI domain:
Companies should proactively demonstrate the
Prof. Bengio identified two primary technical safety of their AI systems, and governments
challenges confronting AI today: the risks to should develop capacity in AI measurement and
security and the looming threat of losing control evaluation for effective oversight.
over AI systems. He illuminated the difficulties in
training AI systems that are assuredly safe and the On regulatory measures, Prof. Bengio advocated for
ease with which malign actors could exploit open- strategies to mitigate risks associated with AI
source AI systems. Furthermore, he discussed the systems falling into the wrong hands. He suggested
scientific community's debate over AI systems implementing licensing regimes, reporting
potentially developing self-serving objectives, requirements, and auditing for the most powerful
deviating from human interests. Prof. Bengio pointed AI systems. He stressed the need for greater
out the current scientific limitations in ensuring that scrutiny before highly capable models are released
AI systems align with human intentions and interests, open source. Vulnerabilities in open-source models
which is evident in existing biases and discrimination can't be retroactively fixed throughout the value
in AI systems. chain once models have been downloaded. Prof.
Bengio underscored that decisions on releasing
Evaluating President’s Biden Executive Order on the these models should involve a democratic
Safe, Secure, and Trustworthy Development and evaluation process due to the significant global risks
Use of Artificial Intelligence, Prof. Bengio involved.
commended it as a pivotal step towards bolstering
AI governance. The order's approach to measuring Finally, Prof. Bengio called for
institutional
and evaluating AI systems based on their innovation in democratic processes to control AI
computational resources used in training was development. He proposed the formation of multi-
highlighted as a key development. He commented stakeholder governance bodies, comprising civil
that presently, compute utilization is a reasonable society, academics, and media, to oversee AI
proxy for models’ capability. The more capability, the development and ensure societal alignment.
more potential to create harm. However, we must

Prof. Bengio identified two primary technical challenges


confronting AI today: the risks to security and the
looming threat of losing control over AI systems.

| 25 |
SAFETY & SECURITY

KEYNOTE | U.S. Representative Anna Eshoo


Anna Eshoo | United States Representative

U.S. Representative Anna Eshoo, in her keynote, competitiveness, and improve society in
addressed the duality of AI as a source of both numerous ways. To this end, she deemed it critical
groundbreaking advancements and potential perils. that the U.S. Congress passes the CREATE AI Act, a
She emphasized the need for AI development to be bipartisan and bicameral piece of legislation that
safe, trustworthy, and responsible, highlighting the would establish the national AI research resource, a
importance of these qualities in the context of rapid shared cyber research infrastructure.
technological progress.
The convergence of biosecurity and AI is another
Representative Eshoo outlined three fundamental area of concern demanding regulation, stressed
requirements for AI research and development: Representative Eshoo. President Biden's Executive
access to good data, sufficient computing power, Order on the Safe, Secure, and Trustworthy
and skilled people. She argued against the Development and Use of Artificial Intelligence
monopolization of AI development by large directed agencies to conduct a study on how AI can
technology companies, advocating for a more increase biosecurity risks, aligned with concerns
inclusive approach. She emphasized that startups, brought to Congress by Representative Eshoo, such
small businesses, academia, medical and non-profit as the need for AI and biosecurity risk assessment.
communities, and the public sector should all have
access to essential AI resources. Finally, Representative Eshoo underscored national
security as Congress’s top priority. As AI doesn't
Representative Eshoo stressed that democratizing recognize any national boundaries, it is also
AI research and development would enable imperative to work on international coordination
researchers and innovators across the United to advance AI governance that reflects fundamental
States to develop AI tools that bolster our national values, protects our democracy, and respects the
security, advance safety and economic rule of law.

Representative Eshoo ... argued against the


monopolization of AI development by large
technology companies, advocating for a more
inclusive approach. She emphasized that
startups, small businesses, academia, medical
and non-profit communities, and the public sector
should all have access to essential AI resources.

| 26 |
SAFETY & SECURITY

PANEL | Managing Safety and Security of


Foundation Models
Irene Solaiman | Head of Global Policy, Hugging Face
Joslyn Barnhart | Senior Research Scientist, Strategic Governance Lead, Google DeepMind
Tom Goldstein | Volpi-Cupal Professor of Computer Science, University of Maryland; Co-PI, NIST-NSF TRAILS
Juraj Čorba | Digital Regulation & Governance Expert, Slovak Ministry of Investments, Regional Development
and Informatization; Chair-Elect, OECD AIGO
Stephanie Ifayemi (moderator) | Head of Policy, Partnership on AI

Main Takeaways
IMPLEMENT SAFETY TESTS AND EVALUATIONS THAT START AT THE DESIGN PHASE:
A consensus emerged around the urgent need for robust safety tests and evaluations for AI systems from an
early stage. Speakers stressed the importance of mitigating risks through thoughtful design and regulation and
the need for third-party model assessments.

ENGAGE A BROAD STAKEHOLDER COMMUNITY IN THE DEVELOPMENT OF MODEL EVALUATIONS,


RISK THRESHOLDS, AND THIRD-PARTY ASSESSMENTS:
Speakers noted the risk of regulatory capture in model evaluations and risk assessments, which should be
developed by a broad group of stakeholders, including civil society representatives.

FORMALIZE A COMMON DEFINITION OF HIGH-RISK OR FRONTIER AI MODELS:


Solaiman and Dr. Barnhart highlighted the difficulties in establishing clear thresholds and criteria for AI models.
This includes the challenges in distinguishing between different types of models and the subjective nature of
model evaluation. There must be an inclusive, interdisciplinary, and ongoing process to develop a definition of
high-risk or frontier AI models.

SET ENFORCEABLE MECHANISMS FOR AI SAFETY:


Governments have a responsibility to operationalize AI safety through the investment and implementation of
mechanisms that ensure that AI systems are safe. Specifically, governments should establish AI safety
institutions and invest in sociotechnical risk-mitigation tools and evaluations.

| 27 |
SAFETY & SECURITY

Discussion
Policymakers across jurisdictions have been asking laboratory level. Notably, he called for platform
themselves which tools they should employ to moderation—the detection and labeling of
evaluate safety and security in AI systems. This generative AI content, to foster public awareness of
session brought to light the varied and complex content authenticity and AI systems’ capabilities. He
aspects of managing the safety and security of suggested tech companies should be proactive in
foundation models, stressing the need for implementing those measures, with a consortium of
collaborative and multifaceted approaches involving major players in the information ecosystem.
regulation, policy frameworks, technical solutions,
and stakeholder engagement. Dr. Joslyn Barnhart echoed Prof. Goldstein’s remark
on the unavoidable nature of adversarial attacks in
Drawing from his technical expertise, Prof. Goldstein AI. She pointed out the need for society-wide
reminded the audience that the ability to jailbreak AI consensus in balancing the benefits and risks of AI
systems does not inherently signal a lack of security. technologies. As consensus emerges around the
Although all systems are vulnerable to breaches, two most dangerous risks, we must identify models likely
dimensions of risk prevention must be to cross those red lines through a set of specific
operationalized across the industry: precautionary criteria and apply rigorous scrutiny through
measures applied in the design and development sociotechnical assessments and safety tests for
stages, and a contextual approach toward risk foundation models.
assessment at the application level, to cover risks
related to deployment. To increase safety and Analyzing concrete measures to establish red lines,
security measures in the design phase of AI systems, Dr. Barnhart highlighted the challenges licensing
Prof. Goldstein suggested learning from similar risk may pose to new market entrants and stressed the
management strategies applied to other, more centrality of model evaluation in AI policy. She
traditional, softwares. In doing so, policymakers underscored the need for academics and civil
should bear in mind that foundation models, if society to contribute to inclusive third-party
compromised, could have extensive negative assessments, especially of foundation models.
impact, due to their widespread use in applications Finally, Dr. Barnhart acknowledged the increasing
across domains. role of governments in AI governance, motivated
by public demand and industry's need for legal
Prof. Goldstein outlined strategies to mitigate harms clarity. She stressed governments’ responsibility to
associated with AI, beyond technical solutions at the invest in safety and public education.

Solaiman pointed out the current inadequacies in


evaluation techniques and the lack of consensus
on definitions of “safety” within the AI community
... [she] called for more comprehensive criteria for
risk assessment of large models ... relying on
computational power as a risk threshold, though
useful and important as a first step, will be
insufficient in the long run.
| 28 |
SAFETY & SECURITY

Irene Solaiman focused on the challenges of misuse industry-led AI narratives to governmental initiatives
and unintentional misuse in AI models. She drew a in defining AI governance. Čorba expressed
critical distinction between models accessible concerns over regulatory capture in certain
through APIs and models with open weights, noting jurisdictions if big tech companies are allowed to
the distinct risks each type presents. Solaiman set standards for foundation AI models. Analyzing
underscored the difficulties in establishing clear risk the best approach to governing the safety and
thresholds for these models and called for extensive security risks of foundation models, he drew a
work to build robust policies spanning the gradient parallel with the crypto sector, suggesting that
of model release methods, from proprietary to emerging technologies should be integrated into
open-source. Meanwhile, we should also implement existing regulatory frameworks, rather than
specific policies to govern the use of generative AI completely revamping them.
to preserve academic integrity.
Čorba also discussed the role of voluntary
She pointed out the current inadequacies in commitments in AI governance. Although valuable in
evaluation techniques and the lack of consensus on initiating discussions, they are undoubtedly
definitions of “safety” within the AI community. insufficient for holistic and effective governance. He
Advocating for a collaborative approach to tackling highlighted the varying approaches to AI
unknown risks, Solaiman called for more governance across different jurisdictions and the
comprehensive criteria for risk assessment of large importance of considering both technological and
models, encompassing sociotechnical aspects. societal factors. Čorba called for a shared
Relying on computational power as a risk threshold, commitment across jurisdictions to adopt a proactive
though useful and important as a first step, will be stance to AI governance: rather than focusing solely
insufficient in the long run. on technology, policies should also influence
societal behaviors and values to steer AI
Juraj Čorba provided insights into the evolving AI development towards the common good.
policy landscape, highlighting the shift from

In 2023, Mistral AI, a French AI startup, released an open-source language model that will
provide detailed instructions for suicide, killing one's spouse and acquiring class-A drugs.
Which of the following responsibilities should apply to developers?

They should bear no responsibility for how the open-


source model is used or modified
24%
24%
They should implement some form of content
moderation within the original code, even if it can be
modified by others
4%
They should publish ethical guidelines for using and
modifying the model but not enforce them
programmatically
24%
They should actively monitor and attempt to policy all
modified versions of the model for harmful content
24%

None of the above

| 29 |
SAFETY & SECURITY

FIRESIDE CHAT | DHS Priorities for AI Governance


Robert Silvers | Under Secretary, U.S. Department of Homeland Security
Nicolas Miailhe (moderator) | Founder, The Future Society

Main Takeaways
DEPLOY PERIODIC IMPACT ASSESSMENTS AND OVERSIGHT PROCESSES TOWARD THE USE OF AI BY
LAW ENFORCEMENT:
AI technologies can be leveraged to improve the performance of law enforcement in fulfilling their statutory
obligations. Considering the level of uncertainty and lack of regulation of AI technologies presently,
establishing oversight mechanisms with direct participation of civil society is crucial for fostering trust in public
institutions and increasing democratic resilience.

DRIVE TALENT AND INVESTMENT FOR LAW ENFORCEMENT TO FIGHT THE USE OF AI IN CRIMINAL
CYBER ACTIVITIES:
Law enforcement agencies require cutting-edge technical tools and expertise to develop efficient strategies to
curb the rapidly expanding use of AI in criminal activities. Governments must direct funding and talent to those
efforts, while also ensuring strict oversight of agencies’ use of AI.

INCLUDE CIVIL SOCIETY REPRESENTATIVES IN GOVERNANCE BOARDS AT LAW ENFORCEMENT


AGENCIES:
Participation and decision-making power in governance boards allow for civil society to influence internal
operations, practices, and policies related to the use of AI, including the power to establish red lines. This
would increase the transparency of law enforcement activities and contribute to social acceptance of AI’s use
in law enforcement.

| 30 |
SAFETY & SECURITY

Discussion
National security and law enforcement institutions private data and securing reliable networks in global
around the world encounter a double-edged sword operations, companies must align their operations
in responding to AI’s impact: while it enables new, with diverse regulatory regimes.
large-scale threats to countries and their
populations, it also presents state forces with A unified global response to AI challenges would
sophisticated technological tools that could facilitate alleviate the burden of cross-border operations,
the fulfillment of their mandate. This fireside chat, benefit companies, and improve security across the
moderated by The Future Society’s founder, Nicolas value chain.
Miailhe, featured insightful remarks from the
Department of Homeland Security’s Under Under Secretary Silvers also discussed the Biden
Secretary, Robert Silvers, on the evolving role of AI administration's efforts to harness AI for public
in cybersecurity and governance. safety, including detecting illegal substances and
products made with forced labor through AI-
Under Secretary Silvers highlighted the increasing enabled supply chain mapping. He emphasized
use of AI to automate cyber attacks, with DHS’s commitment to responsible AI use, ensuring
sophisticated techniques that make it more dificult privacy, bias mitigation, and civil rights are central to
for law enforcement to detect scams. Conversely, AI algorithmic decision-making.
is also a powerful tool for cyber defense, offering
innovative ways to protect against these advanced Finally, Under Secretary Silvers emphasized the
threats. This dual role underscores an emerging need to transform voluntary industry
arms race in the cyber domain, where both attackers commitments into codified regulations, policies, or
and defenders leverage AI capabilities. treaties. He highlighted the formation of DHS’s
Artificial Intelligence Safety and Security Board, a
Addressing the border-agnostic nature of digital blend of federal leads, industry experts, and
security challenges, Under Secretary Silvers academics tasked with developing best practices for
stressed the importance of international AI safety and security.
collaboration in AI governance and regulatory
harmonization. To ensure consistency in protecting

Under Secretary Silvers highlighted the


increasing use of AI to automate cyber attacks ...
Conversely, AI is also a powerful tool for cyber
defense, offering innovative ways to protect
against these advanced threats ... This dual role
underscores an emerging arms race in the cyber
domain, where both attackers and defenders
leverage AI capabilities.

| 31 |
SAFETY & SECURITY

ROUNDTABLE DIALOGUE | Navigating AI Deployment


Responsibly: Open-Source, Fully-closed, and the
Gradient in Between
Alyssa Ayres | Dean, George Washington University Elliott School of International Affairs
Nicolas Miailhe | President and Founder, The Future Society
Luis Aranda | AI Policy Analyst, OECD.AI
Anthony Aguirre | Executive Director & Secretary of the Board, Future of Life Institute; Professor of Physics,
University of California, Santa Cruz
Russell Wald | Deputy Director, Stanford Institute for Human-Centered AI (HAI)
Elizabeth Seger | Research Scholar, Centre for the Governance of AI (GovAI)
Peter Cihon | Senior Policy Manager, GitHub Heather Frase | Senior Fellow, Georgetown’s Center for Security
and Emerging Technology (CSET)
Ian C. Haydon | Science Communicator, Institute for Protein Design

The practice of “open-sourcing” technologies has The speakers presented short remarks
been a subject of both admiration and criticism. On contextualizing the state of AI research and practices
the one hand, it has allowed for “democratization” and the role of open-source in the AI ecosystem.
and inclusivity in technological developments, and These remarks were then followed by group
for software robustness through community-driven discussions and a debriefing session.
inspection and audits, red-teaming, and bug
detection. On the other hand, it allows for these Several speakers challenged the notion of a
technologies—harboring unknown and potentially binary between “open” and “closed” models,
hazardous capabilities—to be more readily misused. pointing toward a spectrum of options regarding
As AI systems become more capable, the potential the level of access to system components such as
for their misuse and harm, such as risks to datasets, code, model cards, and model weights.
cybersecurity and biosecurity, grows Given their widespread use and potential for both
correspondingly. benefit and harm, the release strategies of recently
developed large language models were compared.
This interactive roundtable dialogue brought Biological design tools, which offer groundbreaking
together over 100 AI policy experts to brainstorm medical solutions but also present biosecurity risks,
actionable recommendations for adapting release were also discussed as a use case of interest.
strategies for powerful AI systems.

On the one hand, [“open-sourcing”] has allowed


for “democratization” and inclusivity in
technological developments, and for software
robustness through community-driven inspection
and audits, red-teaming, and bug detection.
On the other hand, it allows for these
technologies ... to be more readily misused.

| 32 |
SAFETY & SECURITY

Discussions probed into the jurisdictional challenges capabilities and generality of AI systems. Licensing
of governing the release of models. Participants emerged as a key mechanism, with some
acknowledged that wide sharing of model weights discussants proposing a centralized authority or a
can make it difficult, if not impossible, to trace and consortium for overseeing a model testing
attribute instances of misuse, and thereby seek process prior to open-source release. Some
redress in such cases. Some pointed out that discussants also stressed that the global majority
transparency does not necessarily have to mean should be appropriately represented in such
granting full access to the model, but stressed that governance processes. The idea of an international
closed models must also be expected to adhere to mechanism, possibly akin to a CERN for AI, was
rigorous transparency requirements, including proposed, focusing on beneficial applications and
assessments by third parties. Some discussants establishing a new social contract with internationally
saw promise in a risk-based approach, combining accountable governance.
national mechanisms such as licenses and global
UN-sanctioned certification, to regulate the Suggested elements toward more robust
deployment of closed models with potential for governance of open-source AI included external
tangible harmful outcomes. expert-led red-teaming, government-funded audits,
and incident reporting.
Discussions underscored the importance of
considering a liability framework based on the

| 33 |
SAFETY & SECURITY

REMARKS | Audrey Plonk


Audrey Plonk | Head of Division, Digital Economy Policy, OECD

Audrey Plonk provided remarks focused on recent Regardless of the form an international institution
developments in AI safety and the OECD’s may take, international norms remain crucial to
dedication to international coordination in AI promote AI safety, robustness, trustworthiness,
governance. In the past few months, the organization and human rights. While the OECD AI Principles lay
took part in key forums, such as the G7 ministerial a foundational framework, she acknowledged the
meeting on the Hiroshima AI process, the UK AI need for additional measures as AI technologies
Safety Summit, and its own multistakeholder network proliferate. Plonk stressed that the OECD is
of AI experts. developing responsible business conduct guidelines
for AI, aiming for flexible yet enforceable
Plonk observed that AI safety has transitioned from mechanisms to guide AI companies operating
a specialized technical concern to a top priority for internationally and address AI-related disputes
governments worldwide. This shift has sparked through mediation.
debates on the necessity of an international
governance regime for advanced foundation Additionally, Plonk highlighted the launch of the
models. In this sense, comparisons with institutions OECD AI Incidents Monitor, a tool to monitor global
like CERN, the IAEA, and the IPCC have been news in real time to detect and classify AI-related
increasingly drawn. incidents, offering a vital resource for international
risk management and data-driven policymaking

| 34 |
MEASUREMENT & STANDARDS

MEASUREMENT & STANDARDS

KEYNOTE | Dr. Erwin Gianchandani


Dr. Erwin Gianchandani | Assistant Director for Technology, Innovation and Partnerships, U.S. National
Science Foundation

Dr. Gianchandani presented the National Science leadership in the pilot implementation of NAIRR
Foundation’s (NSF) role in driving AI innovation in the (National AI Research Resource) to expedite
US. He highlighted how the NSF is advancing its resource accessibility for the research community to
mission with the new directorate for technology, address those societal challenges. In addition, he
innovation, and partnerships, aimed at equipping outlined the NSF’s efforts in funding foundational AI
researchers, startups, and entrepreneurs with research and its dedication to addressing current
resources to translate ideas into societal benefits. and future risks.

Dr. Gianchandani noted that accelerating research is Collaborative and interdisciplinary partnerships are
key to leveraging AI’s transformative potential at the heart of NSF’s approach to AI governance.
responsibly. AI models’ escalating capabilities The Foundation has collaborated with NIST in
have the potential to accelerate scientific establishing the Institute for Trustworthy AI in Law
discoveries, provide solutions to societal and Society (TRAILS)—a co-host of The Athens
challenges, and reshape how we interact with Roundtable—and created the National AI Research
technology. Dr. Gianchandani stressed NSF's Institutes program

| 35 |
MEASUREMENT & STANDARDS

PANEL | Decoding AI: Challenges in Classification,


Measurement, and Evaluation
Elham Tabassi | Associate Director, Information Technology Laboratory and Chief AI Advisor, U.S. NIST
Jared Mueller | Head of External Affairs, Anthropic
Sebastian Hallensleben | Chair of JTC 21, CEN, CENELEC
Emmanuel Kahembwe | CEO, VDE UK
David Broniatowski (moderator) | Associate Professor, The George Washington University; Co-PI and GW Site
Lead, NIST-NSF TRAILS

Main Takeaways
BROADEN THE SCOPE OF AI EVALUATIONS TO INCLUDE SOCIETAL ROBUSTNESS AS A KEY METRIC:
Governments must foster interdisciplinary approaches focused on the safety and societal implications of AI
systems. Ensuring that AI systems are developed and deployed with a comprehensive understanding of their
wider impacts will only be possible with a broader pool of stakeholders and impacted communities participating
in standard-setting. It’s crucial that this work be developed in coordination with various AI safety institutes
globally to share and implement best practices.

DEVELOP UNIFORM METRICS, METHODOLOGIES, TRANSPARENCY REQUIREMENTS, AND REPORTING


STANDARDS TO FACILITATE THE COMPARISON AND ASSESSMENT OF AI SYSTEMS ACROSS
DIFFERENT DOMAINS:
Speakers highlighted how crucial interoperability is in standardizing evaluation processes and making them
more transparent and effective.

ALLOCATE RESOURCES TO STANDARD-SETTING COMMITTEES TO ENSURE BROADER PARTICIPATION


FROM A DIVERSE ARRAY OF STAKEHOLDERS:
This approach would enable more equitable representation and input in the standard-setting process, including
with academia and civil society representatives, ensuring that the standards developed are reflective of a wider
range of perspectives and needs. Inclusion is crucial given the cross-jurisdictional deployment of AI models and
their disproportionate impact on the global majority.

| 36 |
MEASUREMENT & STANDARDS

Discussion
Definitions, metrics, benchmarks, and evaluations system evaluations primarily focus on technical
play a crucial role in the governance of advanced AI robustness—a relatively urgent priority. She stressed
systems. In this session, AI experts delved into that methods should assess AI systems in their
established and emergent challenges in real-world contexts in a scientifically accurate and
classification, measurement, and evaluation, reproducible manner, acknowledging the
proposing concrete measures to achieve complexity of today's technology.
scientifically credible and robust tools and processes
for AI governance. Emmanuel Kahembwe added to this discussion by
emphasizing the limitations of the current training of
Sebastian Hallensleben opened the discussion by technical AI experts. Such professionals often
exploring the evolving nature of AI terminology, receive training focused on a narrow set of systems
highlighting the lack of agreed-upon definitions for (often limited to those that they develop and deploy),
terms like "foundation models" and "generative AI." with an emphasis on technical performance metrics.
He emphasized the importance of differentiating He further noted that governments should facilitate
between raw models like GPT-4 and more coordination between AI Safety Institutes to share
application-oriented systems like ChatGPT, noting and implement best practices.
how these distinctions influence AI governance.
He called for a common understanding of concepts Jared Mueller addressed the scrutiny required to
like trust, truth, and facts, especially in the context of ensure the safety of large AI models, acknowledging
generative AI's impact on consumer applications and that while computational resource utilization
societal challenges. (floating-point operations per second, or “FLOPS”)
may not be a perfect measure, it currently serves as
Elham Tabassi emphasized the evolution of NIST's the best available standard. He also highlighted the
approach to AI measurement and evaluation, risk of regulatory capture and the need for diverse
particularly following the comprehensive Executive expertise in evaluating large models. Mueller
Order 14110 from October 2023. She highlighted underscored the importance of including a broad
NIST's role in developing guidelines for evaluating range of specialists—from civil society to
potentially harmful AI systems, including red- government experts—beyond governance and
teaming strategies, and in creating test policy professionals, to comprehensively cover the
environments in collaboration with other expanding risk profiles in the field of AI.
agencies. Tabassi pointed out that current AI

Mueller underscored the importance of including


a broad range of specialists—from civil society to
government experts—beyond governance and
policy professionals, to comprehensively cover
the expanding risk profiles in the field of AI.

| 37 |
MEASUREMENT & STANDARDS

Delving into the intricacies of consensus-building developments, often lack the resources to engage in
among diverse stakeholders, speakers highlighted voluntary standard-setting processes. To address
the need to broaden the range of expertise and this gap, Kahembwe proposed reevaluating the
backgrounds involved in standard-setting, voluntary aspect of standards-setting activities
recommending the inclusion of communities and allocating resources to remunerate
impacted by AI technologies. Integrating diverse participants. Furthermore, speakers emphasized the
insights from the onset would contribute to more importance of effectively translating consensus into
holistic and impactful AI standards. Drawing from his clear, technically useful documentation. This
role at CEN-CENELEC Joint Technical Committee 21 approach ensures that AI ethics standards are not
on AI (JTC21), Dr. Hallensleben stressed that these only comprehensive and representative but also
committees bear the responsibility of actively practically useful for programmers and engineers.
reaching out to ensure diverse participation. While it
is challenging to achieve consensus with a large and Finally, speakers analyzed the role of standards in
diverse pool of stakeholders, a diversity of shaping not only industry practices but also legal
perspectives tends to enhance the quality and outcomes in cases involving AI technologies. As
applicability of the standards. In this sense, standards gain strength and legitimacy, they could
standards committees based in the Global North increasingly play a pivotal role in judicial cases and
should not overlook the need for representation arbitration. Legal practitioners and judges might be
from the Global South if they aim to have more inclined to rely on these standards in their
international applicability. rulings and to consider expert witnesses familiar with
these benchmarks. This potential judicial reliance on
Looking at practical challenges, speakers identified standards underscores the need for them to be well-
the voluntary nature of participation as a critical established, legitimate, and reflective of broad
barrier to inclusivity in standards-setting. expert consensus.
Stakeholders, particularly those most impacted by AI

| 38 |
MEASUREMENT & STANDARDS

Which term do you


believe is most
23% Foundation model
appropriate as the
'object of regulation' Generative AI
for laws and guidelines 14% 9%
concerning advanced General-purpose AI
AI systems? 0%
6% It is context-dependent

Not sure / none of the


above
48%
I do not believe AI should
be regulated

What do you expect to


be the biggest Keeping pace with
challenge in 31% 10% technological development
developing standards
International coordination
for AI systems? and agreement
10%
Balancing innovation with
regulation

14% Addressing ethical and


societal impacts

35%
Building consensus
among stakeholders

Do you approve of the


Biden Administration's Yes, to manage risks
Executive Order 31% effectively
mandating that 17% Yes, but with specific
developers of exemptions for research
foundation models that No, this requirement isn't
exceed a compute strict enough
threshold must submit No, it hinders innovation
15%
detailed reports to the and competition
US government? Unsure / Need more
31% 6% information

| 39 |
MEASUREMENT & STANDARDS

REMARKS | John C. Havens


John C. Havens | Regenerative Sustainability Practice Lead, The IEEE Standards Association

John C. Havens provided remarks on leveraging AI Indigenous Peoples’ Data. Furthermore, IEEE
for long-term human and planetary well-being. He developed the Planet Positive 2030 program
noted that inclusion, sustainability, and equal reflecting a commitment to regenerative
opportunity are at the core of long-term human sustainability, which seeks to foster a net positive
flourishing. This perspective is notably reflected in impact on the planet.
the UN Sustainable Development Goals (SDGs) and
the OECD Better Life Index. Drawing from those Highlighting the urgent need to protect younger
initiatives, Havens underscored that, in the age of AI, generations and their future, Havens advocated for
it’s crucial to extend our developmental new standards in age-appropriate design and
understanding beyond traditional economic metrics sustainability, emphasizing the inclusion of
such as GDP. children and future generations in technology
innovation. Havens concluded by challenging the
Havens highlighted IEEE’s role in steering AI predominance of Western rationality in AI
governance in that direction, referencing the IEEE governance, and advocating for values like
7010 standard for well-being impact assessment of relationality and community care to guide AI
AI systems (2020) and the pioneering work on development
Recommended Practice for the Provenance of

| 40 |
MEASUREMENT & STANDARDS

REMARKS | Margot Skarpeteig


Margot Skarpeteig | Program Manager, Human Rights, Empowerment and Inclusion, The World Bank

Margot Skarpeteig reflected on the 75th anniversary She highlighted the efforts of the Human Rights Trust
of the Universal Declaration of Human Rights against Fund, which supports World Bank staff in
the backdrop of significant technological understanding the intersection of human rights and
advancements, particularly in AI. She emphasized development in their operations and analytics.
the challenges posed by these advancements,
noting how the potential of AI as a force for Furthermore, Skarpeteig discussed The World
positive change is currently overshadowed by Bank's initiative to develop a comprehensive
threats to human dignity and agency. framework for identifying and mitigating the human
rights risks associated with AI in their institution’s
Skarpeteig underscored The World Bank's operations.
awareness of its crucial role in upholding human
rights within the global digital marketplace.

| 41 |
REGULATION & ENFORCEMENT

REGULATION & ENFORCEMENT

KEYNOTE | U.S. Senator Richard Blumenthal


Richard Blumenthal | United States Senator

Reflecting on the burgeoning influence of AI in 2023, AI Act. This framework proposes establishing a
Senator Blumenthal underscored the significant licensing regime for entities engaged in high-risk
impact AI has on the economy, safety, and AI development and creating an independent
democracy. He cautioned against Congress oversight body with AI expertise. It lays out specific
repeating past errors seen in the technological principles for upcoming legislation aimed at
revolutions of the previous decade, particularly protecting national and economic security, enforcing
referencing the challenges faced with the rapid transparency about AI model limitations and uses,
growth of social media. Senator Blumenthal noted protecting consumers and children, and
Congress's failure to act in the past, which led to the implementing rules like watermarking, disclosure of
rise of monopolistic companies wielding AI usage, and data access for researchers.
disproportionate power. Furthermore, the framework addresses the
accountability of AI companies, holding them
Drawing from his experience as chair of the Judiciary liable for privacy breaches, civil rights violations,
Subcommittee on Privacy, Technology, and the Law or other harms.
in 2023, he shared insights from witness testimony
by industry leaders—including OpenAI CEO Sam Reflecting on international developments, Senator
Altman, Anthropic CEO Dario Amodei, and Microsoft Blumenthal highlighted the significance of the EU's
President and Vice Chair Brad Smith—who, unlike AI Act, lauding it as a groundbreaking effort that sets
social media executives in the past, expressed a baseline rules and standards for AI, and providing a
unanimous call for AI regulation. valuable model for AI regulation akin to the EU's
initiatives in privacy, competition, and online safety.
In August, Senator Blumenthal and Senator Hawley
announced a bipartisan framework for a U.S.

| 42 |
REGULATION & ENFORCEMENT

KEYNOTE | U.S. Senator Brian Schatz


Brian Schatz | United States Senator

Senator Brian Schatz's keynote addressed the cautioned against oversimplified statutory
critical issue of regulating dual-use foundation frameworks that could be manipulated by tech
models at the federal level in the United States. corporations. Senator Schatz stressed the
He emphasized the vital role of the federal importance of a nuanced and adaptable regulatory
government in this endeavor while acknowledging framework capable of addressing the multifaceted
the current lack of a unified approach. challenges posed by AI technologies.

Senator Schatz critiqued traditional regulatory Focused on the immediate steps necessary in AI
methods, which typically either address harms on a regulation, Senator Schatz proposed requiring clear
case-by-case basis or establish an extensive list of disclosure when online content is machine-
statutory provisions, which would be ineffective for generated, which would enhance transparency and
the rapidly evolving field of AI. He stressed the need accountability in the digital realm. Furthermore, he
for the US to develop basic, common-sense, emphasized the urgent need for regulations
future-proof principles that encourage developers concerning the use of data in training AI models,
and deployers to innovate responsibly. advocating for a duty of care from data collectors
towards individuals whose data is being utilized.
Stressing the crucial role of enforcement, Senator
Schatz highlighted the role of federal agencies, but

| 43 |
REGULATION & ENFORCEMENT

KEYNOTE | U.S. Senator Amy Klobuchar


Amy Klobuchar | United States Senator

Legislative guardrails are essential not only to and proprietary work. She stressed the importance
safeguard consumers and intellectual property but of protecting local news organizations, for instance,
also to preserve the very foundations of democracy. from undue reproduction and use of training data
Senator Amy Klobuchar’s keynote underscored the without compensation by large platforms. The
urgent need for legislative guardrails for generative Journalism Competition and Preservation Act, as
AI and the importance of international coordination. she mentioned, aims to empower local news outlets
to negotiate fair compensation for their content—an
Increasingly sophisticated AI-generated content can issue closely related to information integrity and trust
spread misinformation related to elections, such as in information ecosystems and democratic
inaccurate information about voting logistics, posing institutions.
concrete risks to democratic processes and the
upcoming election in the United States. Senator Taking the discussion back to power dynamics and
Klobuchar highlighted key bipartisan efforts to democratic control over AI, Senator Klobuchar
combat the growing threat of deepfakes in U.S. emphasized the need to modernize U.S.
electoral processes. She discussed the need to competition laws to address the unique
confront deceptive practices while ensuring free challenges posed by the concentration of power
speech—an approach encapsulated in the in the AI landscape and called for legislation to
Deceptive AI Act, aimed at curbing the use of ensure transparency and accountability,
fraudulent content in political advertising. particularly for high-risk AI applications. Finally,
recognizing that AI’s challenges transcend national
Highlighting another critical issue with regulating borders, Senator Klobuchar advocated for global
generative AI and protecting the information cooperation in developing and harmonizing AI
ecosystem, Senator Klobuchar advocated for the governance frameworks to effectively address
protection of individuals and content creators these universal challenges.
against the unauthorized use of their voice, likeness,

| 44 |
REGULATION & ENFORCEMENT

FIRESIDE CHAT | Regulating AI across its value chain


Addie Cooke | Global AI Policy Lead at Google Cloud, Google
Cameron Kerry | Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Center for Technology Innovation,
Brookings Institution; Former General Counsel and Acting Secretary, U.S. Department of Commerce
Anna Gressel (moderator) | Counsel, Paul, Weiss

Main Takeaways
BALANCE HORIZONTAL FRAMEWORKS WITH SECTOR-SPECIFIC REGULATION:
Stakeholders must advance discussions around the legal considerations in allocating liability along the AI value
chain to develop robust and legally sound doctrine and policy. Emerging AI liability regimes should consider
existing regulatory frameworks and, when appropriate, complement them, such as with contractual, legal, and
regulatory liability in different sectors.

DEVELOP INTERNATIONAL STANDARDS AND MECHANISMS FOR INCREASED CORPORATE


TRANSPARENCY:
Speakers converged on the need for industry-wide standards independent of binding regulation, alongside
investments in model monitoring tools, transparency requirements, incident reporting protocols, and auditing by
independent third parties, to ensure AI development and deployment and is ethical and preserves public trust.

| 45 |
REGULATION & ENFORCEMENT

Discussion
Liability, often defined in contracts between value Moderator Anna Gressel touched upon the evolving
chain actors, is increasingly being considered at the nature of liability in the AI sector. Beyond regulation
regulatory level as the consequences of AI systems of dual-use foundation models, product liability
grow more severe. This fireside chat delved into the should also be considered in the US, following
complexities of regulating AI across its value chain. Europe's lead on the matter. Given the current
scenario of divergent regulatory proposals and self-
Acknowledging that regulation will be crucial for governance approaches, panelists underscored the
both risk mitigation and industry innovation, this importance of developing and implementing
discussion highlighted the significant challenge of standards, as set by organizations like ISO and IEEE,
allocating responsibility within the AI value chain. to harmonize approaches and reduce compliance
Discussions spanned governance approaches for costs across jurisdictions.
accountability and liability, how supply chain actors
are reacting to the regulatory trends, and the Focusing on corporate responsibility, Cameron Kerry
balancing act of advancing responsible AI across highlighted the need for companies to invest in
jurisdictions. transparency, incident reporting, and thorough
auditing processes regardless of binding
Speakers emphasized the urgency of aligning risk regulation. Kerry drew an analogy to the need for
assessment and compliance with regulatory careful planning and diligent, repeated
trends, as the cost of non-compliance increases measurement in the practice of carpentry,
with every major jurisdiction that enacts AI emphasizing the need for diligence and precision in
regulations. A key point of discussion was the AI regulation and deployment.
regulation of AI developers and deployers and the
varying levels of risks associated with different sizes Addie Cooke pointed out the increasingly relevant
of AI models, particularly in the context of generative role of model monitoring tools in the industry,
AI applications. The conversation touched upon how which can help raise the technical bar for risk
the EU AI Act might influence regulatory approaches assessment. She noted that evaluations should be
to foundation models in other jurisdictions. done at different stages of the value chain and
praised NIST’s Risk Management Framework for its
adaptability and usefulness for the industry.

Given the current scenario of divergent regulatory


proposals and self-governance approaches,
panelists underscored the importance of
developing and implementing standards, as set
by organizations like ISO and IEEE, to harmonize
approaches and reduce compliance costs across
jurisdictions.

| 46 |
REGULATION & ENFORCEMENT

FIRESIDE CHAT | Coordinated approaches for


AI governance
Dragos Tudorache | Member of the European Parliament (pre-recorded remarks)
Lynne E. Parker | Associate Vice Chancellor and Director of the AI Tennessee Initiative, University of
Tennessee, Knoxville
Marek Havrda | Deputy Minister for European Affairs, Office of the Government of the Czech Republic
Nicolas Moës (moderator) | Director, European AI Governance, The Future Society

Main Takeaways
DEVELOP AND IMPLEMENT A SET OF REGULATORY TOOLS TO OPERATIONALIZE SAFETY BY DESIGN:
Such tools must be interoperable across jurisdictions, given the borderless character of the foundation models
value chain. Regulators should invest in regulatory sandboxes to rigorously test and refine foundation models
pre-deployment. This effort should be informed by comprehensive regulatory guidance, global metrics,
industry-wide standards, and interoperable benchmarks.

STRENGTHEN CROSS-BORDER INFORMATION-SHARING BETWEEN REGULATORS:


This is key to harmonize approaches to AI governance between the EU, US, and other global partners. This
effort should include sharing best practices, and knowledge critical for tackling challenges related to
enforcement.

ENHANCE INVOLVEMENT OF DIVERSE STAKEHOLDERS IN AI GOVERNANCE TO GATHER ROBUST


EVIDENCE ABOUT AI’S IMPACT:
Inclusion can be operationalized through advisory panels, public forums led by civil society, and other methods
of obtaining continuous feedback from underrepresented communities.

| 47 |
REGULATION & ENFORCEMENT

Discussion
Over the course of 2023, increasing market demand Dr. Lynne Parker offered insights into how the EU's
for AI has pushed the public interest to the margins. regulatory path might have influenced the U.S.
However, regulatory developments have provided a government’s approach to AI governance. When it
democratic path to balance stakeholders’ power and comes to narrow systems, the sectoral-based
protect the public interest: the European Union's AI approach discussed in the EU resonates with the US
Act. In its final stages, it represents a robust effort to regulatory structure, comprising different agencies
regulate AI models increasingly prevalent in with expertise in different economic sectors. Those
consumer markets and impose guardrails that agencies are already studying or investigating the
uphold safety and fundamental rights. Nevertheless, impact of AI within their mandates. Dr. Parker
ongoing work is necessary to ensure the Act's suggested that federal institutions are well-
strength and enforceability and to transmit regulatory equipped to take up a two-pronged approach:
lessons to other jurisdictions. The role of institutions sector-specific regulations coupled with a
responsible for enforcement at the national level— comprehensive AI governance framework, such as
such as the EU AI Act’s European AI Office—is the work the U.S. executive branch has been
crucial in this regard. This fireside chat focused on advancing since the publication of the Blueprint for
the critical role of coordination between AI policy an AI Bill of Rights and, more recently, Executive
enforcement bodies, particularly across those of the Order 14110.
EU and the US.
Moving the discussion from executive powers to
In opening remarks to the panel, MEP Dragos legislative powers, Marek Havrda articulated the
Tudorache shared insights on the trilogue process challenges in transitioning AI legislation from theory
for the EU AI Act, underscoring the importance of to practice, with a particular focus on the role of a
learning from the EU regulatory journey and European AI Office. This institution would have a
reflecting on upcoming challenges for enforcement. central role in gathering intelligence around the
The international community must ramp up a regulatory learning stemming from the oversight in
coordinated approach to maximize countries’ member states’ jurisdictions and from regulatory
capacity for enforcement, be it within the EU AI Act sandboxes—should the provisions be approved in
jurisdictions or beyond EU borders as regulations the final text of the EU AI Act. Finally, looking into the
emerge in other countries. Executive’s role, Havrda underscored the
importance of coordination among national AI
offices

Discussions tend to focus on the downstream


impact of AI applications on society, such as with
the deployment of surveillance technologies, but,
speakers remarked, it is urgent to rein in
corporations’ actions during the design and
development of AI systems, rather than focusing
solely on deployment.

| 48 |
REGULATION & ENFORCEMENT

for consistent enforcement across the EU, overshadowing complex yet crucial AI discussions.
especially with respect to high-risk systems. If Foundation models have profound implications on a
successful, he remarked, this model could be wide array of downstream applications affecting the
extended to international collaborations. global population. Discussions tend to focus on the
downstream impact of AI applications on society,
Shifting the lens to the global stage, the G7 such as with the deployment of surveillance
Hiroshima Code of Conduct was identified as a technologies, but, speakers remarked, it is urgent to
significant step in guiding the behavior of foundation rein in corporations’ actions during the design and
model developers. However, there remains an development of AI systems, rather than focusing
urgent need for binding regulations, like the EU AI solely on deployment.
Act, to manage and mitigate systemic risks
effectively. As the world turns its eyes to the profound and
borderless impact of some foundation models, the
As we broaden the debate around AI governance concept of “safety by design” is crucial in mitigating
and enforcement to include underrepresented systemic risks at the initial stages of AI
voices and increase democratic participation, development..
speakers discussed that we must be cautious about

In which area is europe most likely to shape the future of AI?

AI safety
42%

Access to talent pool


3%
Generative AI/foundation
models
15%
AI for narrow/high-risk use
14% cases

Diplomacy (e.g. mediating


between the US and China)
26%

| 49
47 |
REMARKS BY CO-HOSTS

Remarks by Co-hosts

REMARKS | Ambassador Ekaterini Nassika


Ekaterini Nassika | Ambassador of the Hellenic Republic to the USA

The Ambassador for Greece in the United States highlighted Greece’s leadership in advancing the rule of law
in the age of AI. As AI will bring about a transformative moment for humanity, it will be crucial to leverage its
potential to promote stronger democracies. The Ambassador also acknowledged the challenges
accompanying such technology, noting that AI must also be tamed so as not to endanger the values of
democracy and the rule of law.

| 50 |
REMARKS BY CO-HOSTS

REMARKS | Stefanos Vitoratos


Stefanos Vitoratos | Co-Founder, Homo Digitalis

Drawing from the Hellenic democratic tradition, Stefanos Vitoratos stressed that the preservation of our
fundamental values should be prioritized and reflected in the core of AI development endeavors.
Acknowledging the importance of democratic debate, Vitoratos commended the kaleidoscope of perspectives
presented by legislators, policymakers, law practitioners, civil society representatives, and developers in the
quest to govern AI across jurisdictions. He expressed concern with the rise of national security discourses
that may exclude AI development from public scrutiny. Vitoratos stressed that stakeholders across
jurisdictions hold a common task of forging new mechanisms and institutional solutions to safeguard the rule of
law and steer AI development and deployment toward the public interest.

REMARKS | Dr. Ellen M. Granberg


Ellen M. Granberg | President, George Washington University

The President of the George Washington (GW) University, Dr. Granberg stressed the importance of cross-
stakeholder solutions for AI governance, and the critical role conversations such as The Athens Roundtable
have in informing both policy and academic priorities in this field. As powerful agents of change, the role of
academics in such conversations is to break down disciplinary silos to protect and enhance human
experience and fundamental rights in the age of AI.

REMARKS | Dr. Pamela Norris


Dr. Pamela Norris | Vice Provost for Research, George Washington University

Vice Provost Pamela Norris emphasized academia's pivotal role in establishing guardrails and good
governance for AI systems, particularly for future generations. Dr. Norris stressed academics’ unique role in
advising policymakers. She called for rigorous, evidence-based research to inform policy and foster trustworthy
AI alongside democratic values. Dr. Norris also underscored the need for training the next generation of AI
professionals to develop AI that is safe and trustworthy, with the potential to positively transform
communities.

... Stefanos Vitoratos stressed that the


preservation of our fundamental values should be
prioritized and reflected in the core of AI
development endeavors.

| 51 |
CONCLUSION

Conclusion
The fifth edition of The Athens Roundtable shed light technological progress does not come at the cost of
on the urgency of adopting a multifaceted approach societal well-being and democratic values.
to AI governance—one that encompasses
comprehensive regulations, precise definitions and Moving forward, The Future Society's role in
metrics, and robust enforcement mechanisms. facilitating dialogues and spearheading
collaborations for institutional innovation becomes
The recommendations emerging from the dialogue more crucial than ever. The insights and policy
point towards a future where AI development is not recommendations from the Athens Roundtable
only governed by the principles of safety and provide a roadmap for action, but they also serve as
responsibility, but also steered by a harmonized a reminder of the challenges ahead. The goal is
legal framework that transcends borders and clear: guide AI development in a manner that
sectors. To achieve this, a collaborative effort is upholds fundamental rights and the rule of law.
required, bringing together policymakers, Achieving this will require continued commitment,
developers, civil society, and impacted communities. creativity, and cooperation from all stakeholders
Beyond coordination, we must develop liability involved. We look forward to collaborating with
frameworks and governance regimes for general- Roundtable partners, participants, and readers of this
purpose foundation models that are adaptive and report in furthering our mission of aligning artificial
agile. These steps are critical in fortifying our intelligence through better governance in the years
democratic institutions to be resilient to the ahead.
disruptive potential of AI—ensuring that

| 52 |
Contact Us!
GENERAL | [email protected]
PRESS | [email protected]

THE FUTURE SOCIETY


867 Boylston Street, 5th Floor,
Boston MA 02116,
United States

www.thefuturesociety.org

You might also like