Towards Effective Governance of Foundation Models and Generative AI: Takeaways from the Fifth Edition of The Athens Roundtable on AI and the Rule of Law
CONTACT: [email protected]
CITE AS: Amanda Leal. “Towards Effective Governance of Foundation Models and Generative AI:
Takeaways from the fifth edition of The Athens Roundtable on AI and the Rule of Law” (The Future
Society, March 2024).
© 2024 by The Future Society. Photography by Emanuel K Miranda. Design by Vilim Pavlović.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Contents

Executive Summary 1
Remarks by Co-hosts 50
Remarks by Ambassador Ekaterini Nassika 50
Remarks by Stefanos Vitoratos 51
Remarks by Dr. Ellen M. Granberg 51
Remarks by Dr. Pamela Norris 51
Conclusion 52
Executive Summary
The barriers to a rights-based approach to AI governance anchored in the rule of law have never been more tangible. Increasing AI capabilities, geopolitical tension, and market-driven interests cast doubt on our ability to collectively uphold the public interest in the development and governance of AI systems.

The Athens Roundtable on AI and the Rule of Law is the premier civil society-led multistakeholder forum on AI governance. When the forum was inaugurated in 2019, AI governance frameworks were incipient. The first national strategies were just being developed, international organizations were kickstarting ethical guidelines and seeking stakeholder consensus, and AI policies were far from the spotlight in intergovernmental forums such as the G7 and G20.

Five years later, 2023 marked a year in which AI governance climbed the agenda of policymakers and decision-makers worldwide. The release of technologies with increasingly general capabilities has generated hype and accelerated a concentration of power in big tech, triggering a societal-scale wake-up call. The growing threats to democratic processes and human rights presented by generative AI systems have prompted calls for regulation. We must collectively demand rigorous standards of safety, security, transparency, and oversight to ensure that these systems are developed and deployed responsibly.
[Infographic] 1,150+ attendees · 100+ countries represented · 65 speakers.
In-person registrants by sector: Civil society 24%, Private sector 22%, Intergovernmental organization 8%, with Government and Academia accounting for the remaining share (21% and 24%).
In-person registrants by gender: Man 49%, Woman 47%, Non-binary 1%, other 3%.
Forward-looking takeaways
The Athens Roundtable informed the public of the latest developments in AI legislation, regulation, standards, and soft governance mechanisms to set appropriate safeguards around foundation models and generative AI. In this context, discussions spanned a broad range of themes, including security vulnerabilities of frontier AI models, policy considerations for open-source AI systems, geopolitical developments, risks of regulatory capture by industry, threats to information ecosystems, and strategies to mitigate the impact of AI on democratic processes.

Discussions probed into national efforts to advance binding regulation, such as U.S. federal legislative efforts, the next steps for federal agencies based on the U.S. Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the European Union's AI Act, and China's generative AI regulation. Dialogues also covered intergovernmental efforts, including the impact of the G7 Hiroshima AI Process on corporate governance, the reach of UNESCO's AI Ethics Recommendation (and subsequent implementation efforts), the potential of a global AI governance initiative stemming from the UN's High-level Advisory Body on AI, and the evidence-based work of the OECD's AI Policy Observatory.

With speakers and participants from over 100 countries, the ideas and arguments presented reflected the viewpoints of a broad range of cultural, political, and socioeconomic backgrounds. Moving forward, The Athens Roundtable maintains one key commitment: to reexamine our current practices and assumptions, welcoming input and feedback from broad audiences, with particular attention paid to engaging underrepresented communities.

Below, we present key recommendations that emerged from discussions. These recommendations reflect The Athens Roundtable's mission of advancing responsible AI governance through a harmonized framework encompassing legal compliance and enforcement across jurisdictions. The report that follows presents a session-by-session summary for a more detailed context of discussions.
Looking ahead, The Future Society remains committed to facilitating dialogues and collaborations. We aim to
develop institutional innovations that ensure that the trajectory of AI development aligns with fundamental
rights and the rule of law for the benefit of all.
REMARKS BY THE FUTURE SOCIETY
GLOBAL COORDINATION & CIVIL SOCIETY INCLUSION
Vilas Dhar, President of the Patrick J. McGovern Foundation, illuminated the deep-rooted, philosophical nature of AI governance discussions, acknowledging the significant contributions of humanists, philosophers, and ethicists over the decades. He stressed the importance of balancing human dignity, justice, and equity with private sector interests, while also creating avenues for solutions that serve universal human interests.

Dhar underscored the critical need for inclusivity in AI policy-making, pointing out the stark digital divide—with 2.6 billion people still offline—and the dominance of Global North governments in shaping our collective future. He highlighted the ongoing struggle for civil society, particularly those representing marginalized communities, to gain a meaningful voice in the AI conversation. In the context of the U.S., Dhar criticized the disproportionate focus on big tech narratives and the over-reliance on voluntary self-regulation, which sidelines civil society's participation. Dhar also emphasized the need to include perspectives of the global majority in AI policy, transcending the traditional focus on the US and the EU.

Dhar proposed three key priorities:

1. Drive public investment to bridge the digital divide and ensure broadband connectivity for all.
2. Address the data gap in AI development by prioritizing the collection and use of diverse data sets that truly represent global populations, especially in areas like health and drug discovery.
3. Leverage policy mechanisms to close the technical capacity gap among the global majority. This is paramount to fostering a diverse community of socially conscious AI practitioners.

Dhar concluded with optimism, highlighting the potential for shared values to lead to policy harmonization, ultimately benefiting economic, social, and political outcomes around the world.
Main Takeaways
ENSURE THE RESILIENCE OF DEMOCRATIC INSTITUTIONS WITH REGULATION AND CAPACITY-BUILDING:
Given the host of unintended outcomes that may occur through the development and deployment of large models, it is urgent to strengthen democratic institutions and develop mechanisms to mitigate harms when they happen. This should be done through increased capacity-building in the public sector and a move towards binding laws.
Discussion
This panel addressed two questions that rose to the core of international AI governance discussions in 2023. First, how can we collectively facilitate an international AI governance regime? Second, what safety mechanisms are needed to address the borderless impact of foundation models?

OECD's Deputy Secretary-General Ulrik Vestergaard Knudsen opened the discussion by classifying the rise of generative AI as a watershed moment, calling for updates to international institutions' work. He detailed the OECD's plans to review its 2019 principles in light of evolving AI capabilities and the OECD's ongoing efforts to collect evidence of AI impacts and strengthen its AI experts community. Knudsen acknowledged the OECD's inherent focus on only a subset of countries, but called for broader collaboration in AI governance to facilitate the convergence of approaches.

UN Tech Envoy Amandeep Gill stressed the need for an inclusive network of institutionalized responses to AI, in which different international institutions and blocs of countries share knowledge and coordinate harmonized approaches. He highlighted the UN Tech Envoy's mandate, whose advisory body is developing a comprehensive risk assessment framework encompassing short- to long-term risks. Dr. Gill is also focusing on democratizing AI opportunities aligned with the Sustainable Development Goals (SDGs). Key challenges include updating practices and responses promptly in a fast-evolving AI landscape, improving multistakeholder participation, and coordinating responses across industry, civil society, and governments at the national and international levels. Adding to the multistakeholder challenge, he noted the looming risk of regulatory capture, given the high concentration of power in a few corporations. Finally, Dr. Gill emphasized that transparency in data source disclosure remains an underexplored area in AI governance with global implications.

UNESCO's Assistant Director-General for Social and Human Sciences, Gabriela Ramos, underlined the necessity of ex-ante AI assessments and adherence to human rights standards. She described UNESCO's collaboration in creating an AI ethics observatory with civil society organizations to ramp up analytical efforts to inform policymaking. In addition to analytical work, UNESCO has been actively working with national governments to apply readiness assessments and build public sector capacity to implement the Recommendation on the Ethics of Artificial Intelligence, adopted by 194 countries. Reflecting on growing concerns about generative AI and foundation models, Ramos stressed the importance of legal responsibility and liability frameworks for AI developers, highlighting the challenge of implementing these globally.

Gary Marcus dissected the shift in public discourse from the "trust" to the "safety" of AI systems, pointing out how events such as ChatGPT's launch contributed to growing safety-oriented concerns. He criticized the rapid commercialization of AI before robust safety guardrails were in place, exemplified by various instances of dangerous outputs by AI systems, as was the case with Microsoft's Sydney. Finally, Dr. Marcus called for work to increase reproducibility in the development of AI systems.
Main Takeaways
LAWMAKERS HAVE A KEY ROLE IN INTERNATIONAL LEGAL AND REGULATORY HARMONIZATION:
Lawmakers can advance AI governance coordination in their respective jurisdictions by enacting laws, voting
on and proposing investment priorities, and raising awareness within government, among colleagues, and with
constituents. Lawmakers, as a crucial stakeholder group, must be engaged in discussions and convenings on
AI governance.
DEVELOP LIABILITY FRAMEWORKS AND REGULATIONS THAT ADDRESS THE GLOBAL CHARACTER OF THE AI SUPPLY CHAIN AND ENABLE REDRESS FOR IMPACTED PEOPLE:
Lawmakers, especially those of advanced economies, must not overlook the impact on the Global South
through companies’ supply chains. Countries should enact liability frameworks that hold actors accountable and
strengthen corporate responsibility beyond their borders.
Discussion
In an insightful conversation on trustworthy AI, Honorable Member of the Tanzanian Parliament Neema Lugangira and policy expert Yolanda Botto-Ludovico highlighted the necessity of global inclusivity in decision-making, knowledge sharing, and capacity building, as well as robust regulatory frameworks that prevent exploitative dynamics in the Global South. The discussion underscored the importance of equitable AI development, international collaboration, and accountability in AI governance, emphasizing that the benefits of AI should be democratized globally in a secure, safe, and ethical manner.

Drawing from her experience in Tanzania and internationally in the Inter-Parliamentary Union (IPU) and the African Parliamentary Network on Internet Governance, MP Lugangira pointed out the crucial need to equip lawmakers with the knowledge and tools to contribute to AI governance effectively. She advocated for increased participation of parliamentarians in AI governance discussions worldwide and underscored the urgent need for global legislative attention to AI's societal implications. In her role at the IPU, MP Lugangira is co-sponsoring a draft resolution for the IPU's first general assembly of 2024, focusing on the impact of AI on democracy and the rule of law. If approved, the resolution could influence national legislative discussions across the 193 countries represented at the IPU.

While Global South countries are economically, politically, and socially affected by the development of AI systems, they have often not been meaningfully represented in international AI governance convenings such as the UK AI Safety Summit. MP Lugangira emphasized the urgency of transforming African countries from mere consumer markets into respected and active participants and developers in the AI landscape. A positive step in that direction is the African Union's ongoing collaboration with the OECD in writing a continental AI Strategy. This strategy will be key in leveraging AI to address critical challenges like food insecurity on the African continent.

Drawing attention to the global majority's crucial role in AI development, MP Lugangira highlighted that data is the backbone of foundation models. She raised concerns about companies exploiting data from African nations without compensation and stressed the importance of enacting laws across jurisdictions allocating liability throughout the global AI supply chain. Countries in the Global North developing AI regulations should set a standard of behavior that allows individuals in the Global South to hold AI companies accountable for harms produced beyond companies' headquarters jurisdictions.
Main Takeaways
EMBRACE A COMPLEMENTARY REGULATORY APPROACH:
Countries should learn from each other, integrating both vertical approaches, as China has applied in some sectors, and horizontal approaches, akin to the EU's, for comprehensive AI governance.
Discussion
In a conversation with Professor Yi Zeng, a renowned expert in AI ethics and governance, TFS's Samuel Curtis inquired about Prof. Zeng's views on China's unique approach to AI governance and its contributions to the global AI regulatory landscape.

During the discussion, Yi Zeng emphasized the symbiotic nature of various regulatory approaches, contrasting the European Union's broad, horizontal AI Act with some of China's more targeted, vertical regulations focusing on specific AI applications, such as recommendation systems and generative AI. Prof. Zeng highlighted the benefits of China's approach, particularly its specificity in addressing AI challenges. He also suggested that, conversely, China could learn from the EU's broader, horizontal framework.

In his analysis of international AI safety commitments—including the UK AI Safety Summit, the Bletchley Declaration, and the joint statement signed by world-leading academics highlighting the importance of AI safety, called the Ditchley Declaration—Prof. Zeng emphasized the necessity of inclusive dialogue in global coordination efforts. He stressed the importance of achieving a unified, global understanding of the risks presented by AI development, noting that this will require engaging with nations across the geopolitical spectrum. The participation of China in the summit was an example of this inclusive approach, ensuring that discussions on AI safety encompass a diversity of cultural and political viewpoints.

In addition to these policy discussions, Prof. Zeng underscored the crucial role of academia in shaping AI governance. He highlighted the unique contributions of independent academic experts in providing balanced, interdisciplinary perspectives on AI's long-term risks. Prof. Zeng advocated for independent academic expertise to guide AI development beyond national competition, focusing on collaborative problem-solving.
Yoichi Iida joined The Athens Roundtable to celebrate the completion of the first phase of the Hiroshima AI Process, with the guiding principles and the G7 code of conduct. He shared a reflection on his experience as a representative of Japan and chair of the Hiroshima AI Process Working Group, which took place on December 1st, 2023.

Mr. Iida commented on the Hiroshima AI Process report, which includes a comprehensive framework that promotes safe, secure, and trustworthy generative AI. Notably, it was developed with the intention of being adopted beyond G7 member countries. The framework comprises four elements: the OECD report covering the potential risks, challenges, and opportunities brought by generative AI and foundation models; guiding principles for AI actors across the value chain; a set of measures and actions targeted at advanced AI (generative AI and foundation models) developers, including the code of conduct; and a set of projects that will explore potential solutions to respond to emerging risks and challenges of those technologies. The projects aim to tackle foundation models' lack of transparency and the spread of AI-enabled disinformation.

Mr. Iida commended the Hiroshima AI Process Working Group's strong commitment to collaborating with a variety of stakeholders across the world and other multilateral organizations to operationalize the framework.
U.S. Representative Jacobs highlighted the need for globally coordinated AI governance, stressed the importance of addressing AI's impact on the Global South, and advocated for inclusive, multilateral engagements.

She underscored the importance of incorporating diverse voices, especially Civil Society Organizations (CSOs) and governments from the Global South, into AI policy discussions. She also urged a comprehensive approach to AI safety, encouraging the AI governance community to develop a broad definition of "AI safety" that encompasses the entire spectrum of AI risks; it is particularly important to address bias and the ongoing harms incurred by marginalized communities.

Finally, Representative Jacobs echoed calls for robust oversight and regulation. She encouraged leading nations, especially the US, to adopt new legislation and develop new resources for AI oversight. She stressed the strategic role that new institutions like the U.S. AI Safety Institute can play in fostering adaptive and effective AI governance.
TRUST & DEMOCRATIC RESILIENCE
Main Takeaways
REGULATE THE FINANCING OF DIGITAL CAMPAIGNS AND THE USE OF MICROTARGETING FOR
ELECTORAL PURPOSES:
The unregulated use of generative AI in electoral campaigns is extremely harmful to democracy. Governments
must ensure that electoral outcomes don’t hinge more on financial resources and technical capabilities than on
democratic discourse and voter engagement.
Discussion
In 2024, approximately 3.9 billion people—48% of the world's population—will participate in general elections across 54 countries, including five nuclear-armed states. At this critical political juncture, policies must address AI's potential to amplify disinformation, heighten cybersecurity threats, and disrupt the information ecosystem.

Opening the discussion, Dr. Rebekah Tromble analyzed the potential impact of generative AI on voter behavior and election dynamics. She emphasized the crucial role of AI transparency in the upcoming 2024 U.S. presidential elections, advocated for data and model disclosure requirements, and stressed the need to educate and inform the public about AI's potential impact on voter behavior and election dynamics.

The discussion continued with an analysis of the adequacy of the current European regulatory landscape in addressing the challenges posed by generative AI and disinformation. Paul Nemitz defended the vetting of large AI models by regulators before public release, similar to the safety requirements in other industries such as automotive and pharmaceuticals. Focusing on elections, Nemitz emphasized the danger of electoral outcomes depending more on digital advertising and sophisticated targeting technologies than on the quality of political arguments or candidates' trustworthiness. He stressed the urgent need to rethink current models of election advertisements and party financing. Reflecting on the Digital Services Act and the European context, Nemitz argued that jurisdictions that adopt platform and electoral advertising regulations, along with policies to combat disinformation, would be less likely to have elections swayed by digital targeting techniques.

Caio Machado criticized the resistance tech companies often exhibit towards regulation. The absence of institutional mechanisms to diagnose problems and understand the impact of technologies creates a trust gap, which may lead to radical, hasty, non-technical solutions or simply inaction. He also pointed out the need to rethink information production, usage, validation, and dissemination. Machado shared his experience combating disinformation in Brazil, where Institute Vero collaborated with social media creators to educate young people and judiciary staff on fact-checking and open-source investigation tools. Regulations and institutional mechanisms should be tailored to their respective local contexts while pursuing a global goal: strengthening democratic institutions' resilience to technological disruption.

In her response, Marielza Oliveira from UNESCO's Division for Digital Inclusion, Policies and Transformation articulated the organization's approach to balancing freedom of expression with oversight for a democratic information ecosystem, rooted in Article 19 of the International Covenant on Civil and Political Rights. She underscored the threats posed by the rapid diffusion of harmful AI-generated content on social media platforms. In the Global South, infrastructure and skills gaps exacerbate disparities in the information ecosystem. To fight the surge of AI-generated content, Dr. Oliveira emphasized UNESCO's efforts in assessing digital ecosystems: currently, 44 countries are voluntarily assessing the extent to which their digital ecosystems are human rights-based, open, accessible, and multi-stakeholder-led. These assessments help identify and address systemic shortcomings, such as issues related to language barriers, accessibility, and inclusion.

Speakers analyzed possible restrictive measures to protect electoral integrity. The discussion also touched upon the need to rethink business models that significantly impact democracy and the rule of law. Micro-targeting, for instance, is widely accepted in the economic realm but problematic for electoral integrity.

Furthermore, society must be equipped with critical thinking skills, media literacy, and access to information to properly make sense of the digital ecosystem, and must demand accountability of tech corporations, politicians, and governments for the development and deployment of potentially harmful AI systems. Speakers also emphasized the importance of supporting trustworthy information sources, journalists, and local news in light of AI-generated content.

Finally, speakers concluded that transparency alone is insufficient for preserving democracy. Governments should adopt positive agendas focused on rebuilding trust, lest environments of systemic distrust undermine political institutions themselves.
Main Takeaways
DEVELOP AND DEPLOY FORMAL TRAINING AND GUIDELINES FOR THE USE OF AI IN THE JUDICIARY:
The panel underscored the need for formal training and guidelines on the judicial use of AI, emphasizing that AI
tools must be leveraged in an informed, ethical, and responsible manner. UNESCO has been leading efforts in
this area, with the MOOC on AI and the Rule of Law co-produced with The Future Society, the Toolkit on AI and
the Rule of Law, and the upcoming guidelines for the use of generative AI in judicial contexts. Acknowledging
the diverse impacts of AI across different regions, the discussion highlighted the need for contextual
adaptations of training and guidelines and equitable access to resources, particularly in the Global South.
Discussion
This fireside chat brought together experts from different corners of the world to discuss the burgeoning intersection of generative AI and the judiciary. Moderated by Cédric Wachholz, panelists shed light on AI utilization in judicial settings, the associated risks, and the pressing need for guidelines and training in this domain.

Professor Juan David Gutierrez Rodriguez initiated the conversation by sharing findings from UNESCO's global survey on generative AI in the judiciary. The survey, which received responses from nearly 100 countries, indicated a significant gap between legal operators' familiarity with AI and its practical application in professional legal contexts. While most respondents were acquainted with AI, only a fraction used AI tools, such as large language models, for professional purposes. Highlighted concerns included data security, reliability of information, and potential violations of privacy and copyright. Notably, a vast majority of respondents had not received formal AI training, indicating a dire need for educational initiatives and mandatory rules to govern the use of AI in legal settings.

Miriam Stankovich underscored the challenges posed by generative AI tools in the judiciary. Drawing from her collaboration with UNESCO on the recently launched Global Toolkit on AI and the Rule of Law, she pointed out the tendency of generative AI tools to create plausible but not necessarily accurate outputs, "hallucinate," and amplify bias. Stankovich emphasized the urgent need for enhanced regulation, governance, and, critically, digital literacy among judges to navigate the complexities of AI in judicial proceedings.

Linda Bonyo focused on the context-specific impact of technology, highlighting the disparities in AI tool performance and adoption across different regions, particularly in Kenya and the broader Global South. She pointed out that when procuring cutting-edge technologies, governments in these regions often must rely upon products developed in the Global North, emphasizing the issue of vendor lock-in due to the lack of viable local alternatives. Bonyo further advocated for equitable access to computing resources in the Global South, helping countries transcend the roles they might otherwise be confined to—mere consumption and data labeling.

Judge Kimberly Kim provided a cautiously pragmatic perspective on the use of generative AI in the U.S. judiciary. She highlighted the judiciary's resistance to technological change and the need for tools and programs to educate legal operators about generative AI. Preparing the judiciary for the operational impacts of AI is urgent: courts might see an increase in dockets due to AI-related claims and lawsuits. She furthermore stressed the urgent need for equitable access to justice: costly AI-assisted legal tools will confer strategic legal advantages on those with access to them, which will likely contribute to a growing digital divide in the justice system.

Professor Rodriguez concluded the panel with insights into upcoming UNESCO recommendations on generative AI use in judicial contexts. He emphasized the need to test AI tools before deployment, particularly in high-risk environments like the justice sector, and to assess their impact on human rights.
Main Takeaways
DEVELOP STRATEGIES AND ORGANIZATIONAL STRUCTURES THAT ALLOW CIVIL SOCIETY ORGANIZATIONS TO UNITE IN ADVOCATING FOR PROMISING AI POLICIES:
Civil society is composed of actors with a wide range of priorities, values, and backgrounds, but they share a
common goal of advancing the public interest. Civil society organizations should collectively identify and
advance promising AI policies to counteract growing corporate lobbying efforts.
Discussion
This panel centered on the evolving landscape of AI governance in 2024, exploring the balance between technological innovation and the need for robust regulatory frameworks. The conversation highlighted the importance of diverse stakeholder involvement in shaping AI policy and the challenges of adapting existing regulatory mechanisms to the nuanced demands of AI technologies.

Keith Sonderling emphasized the significance of involving new stakeholders, such as auditors and AI developers, in the regulatory space, particularly in the context of employment and civil rights. He highlighted the necessity of integrating these new perspectives to ensure AI technologies are developed and deployed without discriminating against marginalized groups. Government agencies such as the EEOC have a crucial role to play, regardless of legislative developments and new regulations, in enforcing existing laws in cases pertaining to the use of AI and its impact on people, as well as in applying rigorous oversight to the deployment of AI in the sectors they are mandated for. The use of natural language processing tools in hiring assessments, for instance, might discriminate against non-native English speakers or people with speech impairments, which falls under the purview of the EEOC.

Peter Schildkraut discussed the judiciary's vital role, especially in the U.S., in defining rights and addressing AI-related harms. He provided insights into ongoing litigation that illustrates the judiciary's role in clarifying the application of existing laws to AI technologies, thus shaping the trajectory of AI regulation. In addition, Schildkraut analyzed the industry's emerging challenges in compliance and risk management. Notably, companies should ensure their risk- and impact-assessment bodies comprise professionals with technical AI expertise alongside professionals with different backgrounds and subject matter expertise. Finally, these bodies should be empowered within the company to make decisions to discontinue the development or production of dangerous AI models.

Centering the main challenge for 2024 on AI policy and representation, Tawana Petty stressed that, although the US has advanced in developing AI governance frameworks such as the Blueprint for an AI Bill of Rights, there are still missing voices in the dialogues and decision-making processes pertaining to AI governance. Petty underscored the potential of people and grassroots movements to influence policy and demand inclusion when civil society groups are marginalized in decision-making processes. She advocated for the inclusion of diverse voices—particularly those most impacted by AI technologies—in regulatory discussions. To advance responsible AI policy, civil society organizations should coordinate efforts, leveraging intersectionality and elevating marginalized voices rather than seeking a monolithic approach.
TRUST & DEMOCRATIC RESILIENCE
Cédric Wachholz shared insights from a recent survey conducted among UNESCO's network of 35,000 judicial operators from over 100 countries, focusing on generative AI and its role in the judiciary. The survey revealed a dramatic increase in AI use within legal systems worldwide, raising important questions about AI's role in enhancing justice while upholding human rights and democratic values.

While there had initially been an international convergence towards a multidimensional approach to AI governance focused on the protection of human rights, recent industry developments suggested market pressures often prioritize profit over safety and private interests over public ethical AI governance.

Meanwhile, governments are grappling with fostering innovation-friendly environments while establishing clear, effective AI guardrails. In the judiciary, AI offers potential benefits in decision-making, access to justice, and crime prevention.

However, cases in which AI systems are the object of litigation remain markedly complex. Wachholz cited a case in Brazil where the use of “smart billboards” in the São Paulo metro system to predict riders’ emotions and other attributes was challenged.

Wachholz also mentioned UNESCO's significant role in training judicial operators and the growing demand for AI training. He referenced the AI and the Rule of Law MOOC, launched by UNESCO in partnership with The Future Society and other organizations, which has educated over 5,900 judicial operators from 141 countries. Additionally, UNESCO recently introduced a Global Toolkit on AI and the Rule of Law for the judiciary.

In conclusion, Wachholz called for collaborative efforts to transform discussions into action, urging participants to work together to ensure AI supports rather than undermines justice.
SAFETY & SECURITY
In an inspiring keynote, Professor Yoshua Bengio laid out his perspective on countering safety and security risks of increasingly capable AI systems, examining how the balance of power is pivotal for the survival of democracies.

Prof. Bengio identified two primary technical challenges confronting AI today: the risks to security and the looming threat of losing control over AI systems. He illuminated the difficulties in training AI systems that are assuredly safe and the ease with which malign actors could exploit open-source AI systems. Furthermore, he discussed the scientific community's debate over AI systems potentially developing self-serving objectives, deviating from human interests. Prof. Bengio pointed out the current scientific limitations in ensuring that AI systems align with human intentions and interests, which is evident in existing biases and discrimination in AI systems.

Evaluating President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Prof. Bengio commended it as a pivotal step towards bolstering AI governance. The order's approach to measuring and evaluating AI systems based on their computational resources used in training was highlighted as a key development. He commented that presently, compute utilization is a reasonable proxy for models’ capability. The more capability, the more potential to create harm. However, we must continue to develop more robust measurement and evaluation mechanisms.

Prof. Bengio also emphasized the responsibilities of companies and governments in the AI domain: companies should proactively demonstrate the safety of their AI systems, and governments should develop capacity in AI measurement and evaluation for effective oversight.

On regulatory measures, Prof. Bengio advocated for strategies to mitigate risks associated with AI systems falling into the wrong hands. He suggested implementing licensing regimes, reporting requirements, and auditing for the most powerful AI systems. He stressed the need for greater scrutiny before highly capable models are released open source. Vulnerabilities in open-source models can't be retroactively fixed throughout the value chain once models have been downloaded. Prof. Bengio underscored that decisions on releasing these models should involve a democratic evaluation process due to the significant global risks involved.

Finally, Prof. Bengio called for institutional innovation in democratic processes to control AI development. He proposed the formation of multi-stakeholder governance bodies, comprising civil society, academics, and media, to oversee AI development and ensure societal alignment.
U.S. Representative Anna Eshoo, in her keynote, addressed the duality of AI as a source of both groundbreaking advancements and potential perils. She emphasized the need for AI development to be safe, trustworthy, and responsible, highlighting the importance of these qualities in the context of rapid technological progress.

Representative Eshoo outlined three fundamental requirements for AI research and development: access to good data, sufficient computing power, and skilled people. She argued against the monopolization of AI development by large technology companies, advocating for a more inclusive approach. She emphasized that startups, small businesses, academia, medical and non-profit communities, and the public sector should all have access to essential AI resources.

Representative Eshoo stressed that democratizing AI research and development would enable researchers and innovators across the United States to develop AI tools that bolster our national security, advance safety and economic competitiveness, and improve society in numerous ways. To this end, she deemed it critical that the U.S. Congress passes the CREATE AI Act, a bipartisan and bicameral piece of legislation that would establish the national AI research resource, a shared cyber research infrastructure.

The convergence of biosecurity and AI is another area of concern demanding regulation, stressed Representative Eshoo. President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directed agencies to conduct a study on how AI can increase biosecurity risks, aligned with concerns brought to Congress by Representative Eshoo, such as the need for AI and biosecurity risk assessment.

Finally, Representative Eshoo underscored national security as Congress’s top priority. As AI doesn't recognize any national boundaries, it is also imperative to work on international coordination to advance AI governance that reflects fundamental values, protects our democracy, and respects the rule of law.
Main Takeaways
IMPLEMENT SAFETY TESTS AND EVALUATIONS THAT START AT THE DESIGN PHASE:
A consensus emerged around the urgent need for robust safety tests and evaluations for AI systems from an
early stage. Speakers stressed the importance of mitigating risks through thoughtful design and regulation and
the need for third-party model assessments.
Discussion
Policymakers across jurisdictions have been asking themselves which tools they should employ to evaluate safety and security in AI systems. This session brought to light the varied and complex aspects of managing the safety and security of foundation models, stressing the need for collaborative and multifaceted approaches involving regulation, policy frameworks, technical solutions, and stakeholder engagement.

Drawing from his technical expertise, Prof. Goldstein reminded the audience that the ability to jailbreak AI systems does not inherently signal a lack of security. Although all systems are vulnerable to breaches, two dimensions of risk prevention must be operationalized across the industry: precautionary measures applied in the design and development stages, and a contextual approach toward risk assessment at the application level, to cover risks related to deployment. To increase safety and security measures in the design phase of AI systems, Prof. Goldstein suggested learning from similar risk management strategies applied to other, more traditional, software. In doing so, policymakers should bear in mind that foundation models, if compromised, could have extensive negative impact, due to their widespread use in applications across domains.

Prof. Goldstein outlined strategies to mitigate harms associated with AI, beyond technical solutions at the laboratory level. Notably, he called for platform moderation—the detection and labeling of generative AI content—to foster public awareness of content authenticity and AI systems’ capabilities. He suggested tech companies should be proactive in implementing those measures, with a consortium of major players in the information ecosystem.

Dr. Joslyn Barnhart echoed Prof. Goldstein’s remark on the unavoidable nature of adversarial attacks in AI. She pointed out the need for society-wide consensus in balancing the benefits and risks of AI technologies. As consensus emerges around the most dangerous risks, we must identify models likely to cross those red lines through a set of specific criteria and apply rigorous scrutiny through sociotechnical assessments and safety tests for foundation models.

Analyzing concrete measures to establish red lines, Dr. Barnhart highlighted the challenges licensing may pose to new market entrants and stressed the centrality of model evaluation in AI policy. She underscored the need for academics and civil society to contribute to inclusive third-party assessments, especially of foundation models. Finally, Dr. Barnhart acknowledged the increasing role of governments in AI governance, motivated by public demand and industry's need for legal clarity. She stressed governments’ responsibility to invest in safety and public education.
Irene Solaiman focused on the challenges of misuse and unintentional misuse in AI models. She drew a critical distinction between models accessible through APIs and models with open weights, noting the distinct risks each type presents. Solaiman underscored the difficulties in establishing clear risk thresholds for these models and called for extensive work to build robust policies spanning the gradient of model release methods, from proprietary to open-source. Meanwhile, we should also implement specific policies to govern the use of generative AI to preserve academic integrity.

She pointed out the current inadequacies in evaluation techniques and the lack of consensus on definitions of “safety” within the AI community. Advocating for a collaborative approach to tackling unknown risks, Solaiman called for more comprehensive criteria for risk assessment of large models, encompassing sociotechnical aspects. Relying on computational power as a risk threshold, though useful and important as a first step, will be insufficient in the long run.

Juraj Čorba provided insights into the evolving AI policy landscape, highlighting the shift from industry-led AI narratives to governmental initiatives in defining AI governance. Čorba expressed concerns over regulatory capture in certain jurisdictions if big tech companies are allowed to set standards for foundation AI models. Analyzing the best approach to governing the safety and security risks of foundation models, he drew a parallel with the crypto sector, suggesting that emerging technologies should be integrated into existing regulatory frameworks, rather than completely revamping them.

Čorba also discussed the role of voluntary commitments in AI governance. Although valuable in initiating discussions, they are undoubtedly insufficient for holistic and effective governance. He highlighted the varying approaches to AI governance across different jurisdictions and the importance of considering both technological and societal factors. Čorba called for a shared commitment across jurisdictions to adopt a proactive stance to AI governance: rather than focusing solely on technology, policies should also influence societal behaviors and values to steer AI development towards the common good.
In 2023, Mistral AI, a French AI startup, released an open-source language model that would provide detailed instructions for suicide, killing one's spouse, and acquiring class-A drugs. Which of the following responsibilities should apply to developers?
Main Takeaways
DEPLOY PERIODIC IMPACT ASSESSMENTS AND OVERSIGHT PROCESSES FOR THE USE OF AI BY
LAW ENFORCEMENT:
AI technologies can be leveraged to improve the performance of law enforcement agencies in fulfilling their statutory obligations. Given the current uncertainty surrounding AI technologies and their lack of regulation, establishing oversight mechanisms with direct participation of civil society is crucial for fostering trust in public institutions and increasing democratic resilience.
DRIVE TALENT AND INVESTMENT FOR LAW ENFORCEMENT TO FIGHT THE USE OF AI IN CRIMINAL
CYBER ACTIVITIES:
Law enforcement agencies require cutting-edge technical tools and expertise to develop efficient strategies to
curb the rapidly expanding use of AI in criminal activities. Governments must direct funding and talent to those
efforts, while also ensuring strict oversight of agencies’ use of AI.
Discussion
National security and law enforcement institutions around the world encounter a double-edged sword in responding to AI’s impact: while it enables new, large-scale threats to countries and their populations, it also presents state forces with sophisticated technological tools that could facilitate the fulfillment of their mandate. This fireside chat, moderated by The Future Society’s founder, Nicolas Miailhe, featured insightful remarks from the Department of Homeland Security’s Under Secretary, Robert Silvers, on the evolving role of AI in cybersecurity and governance.

Under Secretary Silvers highlighted the increasing use of AI to automate cyber attacks, with sophisticated techniques that make it more difficult for law enforcement to detect scams. Conversely, AI is also a powerful tool for cyber defense, offering innovative ways to protect against these advanced threats. This dual role underscores an emerging arms race in the cyber domain, where both attackers and defenders leverage AI capabilities.

Addressing the border-agnostic nature of digital security challenges, Under Secretary Silvers stressed the importance of international collaboration in AI governance and regulatory harmonization. To ensure consistency in protecting private data and securing reliable networks in global operations, companies must align their operations with diverse regulatory regimes.

A unified global response to AI challenges would alleviate the burden of cross-border operations, benefit companies, and improve security across the value chain.

Under Secretary Silvers also discussed the Biden administration's efforts to harness AI for public safety, including detecting illegal substances and products made with forced labor through AI-enabled supply chain mapping. He emphasized DHS’s commitment to responsible AI use, ensuring privacy, bias mitigation, and civil rights are central to algorithmic decision-making.

Finally, Under Secretary Silvers emphasized the need to transform voluntary industry commitments into codified regulations, policies, or treaties. He highlighted the formation of DHS’s Artificial Intelligence Safety and Security Board, a blend of federal leads, industry experts, and academics tasked with developing best practices for AI safety and security.
The practice of “open-sourcing” technologies has been a subject of both admiration and criticism. On the one hand, it has allowed for “democratization” and inclusivity in technological developments, and for software robustness through community-driven inspection and audits, red-teaming, and bug detection. On the other hand, it allows for these technologies—harboring unknown and potentially hazardous capabilities—to be more readily misused. As AI systems become more capable, the potential for their misuse and harm, such as risks to cybersecurity and biosecurity, grows correspondingly.

This interactive roundtable dialogue brought together over 100 AI policy experts to brainstorm actionable recommendations for adapting release strategies for powerful AI systems.

The speakers presented short remarks contextualizing the state of AI research and practices and the role of open-source in the AI ecosystem. These remarks were then followed by group discussions and a debriefing session.

Several speakers challenged the notion of a binary between “open” and “closed” models, pointing toward a spectrum of options regarding the level of access to system components such as datasets, code, model cards, and model weights. Given their widespread use and potential for both benefit and harm, the release strategies of recently developed large language models were compared. Biological design tools, which offer groundbreaking medical solutions but also present biosecurity risks, were also discussed as a use case of interest.
Discussions probed into the jurisdictional challenges of governing the release of models. Participants acknowledged that wide sharing of model weights can make it difficult, if not impossible, to trace and attribute instances of misuse, and thereby seek redress in such cases. Some pointed out that transparency does not necessarily have to mean granting full access to the model, but stressed that closed models must also be expected to adhere to rigorous transparency requirements, including assessments by third parties. Some discussants saw promise in a risk-based approach, combining national mechanisms such as licenses and global UN-sanctioned certification, to regulate the deployment of closed models with potential for tangible harmful outcomes.

Discussions underscored the importance of considering a liability framework based on the capabilities and generality of AI systems. Licensing emerged as a key mechanism, with some discussants proposing a centralized authority or a consortium for overseeing a model testing process prior to open-source release. Some discussants also stressed that the global majority should be appropriately represented in such governance processes. The idea of an international mechanism, possibly akin to a CERN for AI, was proposed, focusing on beneficial applications and establishing a new social contract with internationally accountable governance.

Suggested elements toward more robust governance of open-source AI included external expert-led red-teaming, government-funded audits, and incident reporting.
Audrey Plonk provided remarks focused on recent developments in AI safety and the OECD’s dedication to international coordination in AI governance. In the past few months, the organization took part in key forums, such as the G7 ministerial meeting on the Hiroshima AI Process, the UK AI Safety Summit, and its own multistakeholder network of AI experts.

Plonk observed that AI safety has transitioned from a specialized technical concern to a top priority for governments worldwide. This shift has sparked debates on the necessity of an international governance regime for advanced foundation models. In this sense, comparisons with institutions like CERN, the IAEA, and the IPCC have been increasingly drawn.

Regardless of the form an international institution may take, international norms remain crucial to promote AI safety, robustness, trustworthiness, and human rights. While the OECD AI Principles lay a foundational framework, she acknowledged the need for additional measures as AI technologies proliferate. Plonk stressed that the OECD is developing responsible business conduct guidelines for AI, aiming for flexible yet enforceable mechanisms to guide AI companies operating internationally and address AI-related disputes through mediation.

Additionally, Plonk highlighted the launch of the OECD AI Incidents Monitor, a tool to monitor global news in real time to detect and classify AI-related incidents, offering a vital resource for international risk management and data-driven policymaking.
MEASUREMENT & STANDARDS
Dr. Gianchandani presented the National Science Foundation’s (NSF) role in driving AI innovation in the US. He highlighted how the NSF is advancing its mission with the new Directorate for Technology, Innovation and Partnerships, aimed at equipping researchers, startups, and entrepreneurs with resources to translate ideas into societal benefits.

Dr. Gianchandani noted that accelerating research is key to leveraging AI’s transformative potential responsibly. AI models’ escalating capabilities have the potential to accelerate scientific discoveries, provide solutions to societal challenges, and reshape how we interact with technology. Dr. Gianchandani stressed NSF's leadership in the pilot implementation of NAIRR (National AI Research Resource) to expedite resource accessibility for the research community to address those societal challenges. In addition, he outlined the NSF’s efforts in funding foundational AI research and its dedication to addressing current and future risks.

Collaborative and interdisciplinary partnerships are at the heart of NSF’s approach to AI governance. The Foundation has collaborated with NIST in establishing the Institute for Trustworthy AI in Law and Society (TRAILS)—a co-host of The Athens Roundtable—and created the National AI Research Institutes program.
Main Takeaways
BROADEN THE SCOPE OF AI EVALUATIONS TO INCLUDE SOCIETAL ROBUSTNESS AS A KEY METRIC:
Governments must foster interdisciplinary approaches focused on the safety and societal implications of AI
systems. Ensuring that AI systems are developed and deployed with a comprehensive understanding of their
wider impacts will only be possible with a broader pool of stakeholders and impacted communities participating
in standard-setting. It’s crucial that this work be developed in coordination with various AI safety institutes
globally to share and implement best practices.
Discussion
Definitions, metrics, benchmarks, and evaluations play a crucial role in the governance of advanced AI systems. In this session, AI experts delved into established and emergent challenges in classification, measurement, and evaluation, proposing concrete measures to achieve scientifically credible and robust tools and processes for AI governance.

Sebastian Hallensleben opened the discussion by exploring the evolving nature of AI terminology, highlighting the lack of agreed-upon definitions for terms like "foundation models" and "generative AI." He emphasized the importance of differentiating between raw models like GPT-4 and more application-oriented systems like ChatGPT, noting how these distinctions influence AI governance. He called for a common understanding of concepts like trust, truth, and facts, especially in the context of generative AI's impact on consumer applications and societal challenges.

Elham Tabassi emphasized the evolution of NIST's approach to AI measurement and evaluation, particularly following the comprehensive Executive Order 14110 from October 2023. She highlighted NIST's role in developing guidelines for evaluating potentially harmful AI systems, including red-teaming strategies, and in creating test environments in collaboration with other agencies. Tabassi pointed out that current AI system evaluations primarily focus on technical robustness—a relatively urgent priority. She stressed that methods should assess AI systems in their real-world contexts in a scientifically accurate and reproducible manner, acknowledging the complexity of today's technology.

Emmanuel Kahembwe added to this discussion by emphasizing the limitations of the current training of technical AI experts. Such professionals often receive training focused on a narrow set of systems (often limited to those that they develop and deploy), with an emphasis on technical performance metrics. He further noted that governments should facilitate coordination between AI Safety Institutes to share and implement best practices.

Jared Mueller addressed the scrutiny required to ensure the safety of large AI models, acknowledging that while computational resource utilization (total floating-point operations, or "FLOPs") may not be a perfect measure, it currently serves as the best available standard. He also highlighted the risk of regulatory capture and the need for diverse expertise in evaluating large models. Mueller underscored the importance of including a broad range of specialists—from civil society to government experts—beyond governance and policy professionals, to comprehensively cover the expanding risk profiles in the field of AI.
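The compute-based trigger that recurs throughout these sessions can be made concrete with a back-of-the-envelope sketch. This is an illustration, not part of the roundtable record: the 6-FLOPs-per-parameter-per-token approximation for training compute and the 10^26-operation reporting threshold cited in EO 14110 are assumptions included here for context only.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough estimate of training compute: ~6 FLOPs per parameter per token
    (a widely used heuristic, not an exact accounting)."""
    return 6.0 * params * tokens

# Reporting threshold for dual-use foundation models cited in EO 14110.
EO_THRESHOLD_FLOPS = 1e26

def exceeds_threshold(params: float, tokens: float) -> bool:
    """True if the estimated training compute crosses the reporting threshold."""
    return training_flops(params, tokens) > EO_THRESHOLD_FLOPS

# Example: a hypothetical 70-billion-parameter model trained on 2 trillion
# tokens lands around 8.4e23 FLOPs, well below the 1e26 trigger.
print(exceeds_threshold(70e9, 2e12))  # prints False
```

The sketch also shows why speakers called compute an imperfect proxy: the threshold depends only on scale, not on a model's actual capabilities or deployment context.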
Delving into the intricacies of consensus-building among diverse stakeholders, speakers highlighted the need to broaden the range of expertise and backgrounds involved in standard-setting, recommending the inclusion of communities impacted by AI technologies. Integrating diverse insights from the outset would contribute to more holistic and impactful AI standards. Drawing from his role at CEN-CENELEC Joint Technical Committee 21 on AI (JTC21), Dr. Hallensleben stressed that these committees bear the responsibility of actively reaching out to ensure diverse participation. While it is challenging to achieve consensus with a large and diverse pool of stakeholders, a diversity of perspectives tends to enhance the quality and applicability of the standards. In this sense, standards committees based in the Global North should not overlook the need for representation from the Global South if they aim to have international applicability.

Looking at practical challenges, speakers identified the voluntary nature of participation as a critical barrier to inclusivity in standards-setting. Stakeholders, particularly those most impacted by AI developments, often lack the resources to engage in voluntary standard-setting processes. To address this gap, Kahembwe proposed reevaluating the voluntary aspect of standards-setting activities and allocating resources to remunerate participants. Furthermore, speakers emphasized the importance of effectively translating consensus into clear, technically useful documentation. This approach ensures that AI ethics standards are not only comprehensive and representative but also practically useful for programmers and engineers.

Finally, speakers analyzed the role of standards in shaping not only industry practices but also legal outcomes in cases involving AI technologies. As standards gain strength and legitimacy, they could increasingly play a pivotal role in judicial cases and arbitration. Legal practitioners and judges might be more inclined to rely on these standards in their rulings and to consider expert witnesses familiar with these benchmarks. This potential judicial reliance on standards underscores the need for them to be well-established, legitimate, and reflective of broad expert consensus.
[Figure: "Building consensus among stakeholders" (35%)]
John C. Havens provided remarks on leveraging AI for long-term human and planetary well-being. He noted that inclusion, sustainability, and equal opportunity are at the core of long-term human flourishing. This perspective is notably reflected in the UN Sustainable Development Goals (SDGs) and the OECD Better Life Index. Drawing from those initiatives, Havens underscored that, in the age of AI, it’s crucial to extend our developmental understanding beyond traditional economic metrics such as GDP.

Havens highlighted IEEE’s role in steering AI governance in that direction, referencing the IEEE 7010 standard for well-being impact assessment of AI systems (2020) and the pioneering work on Recommended Practice for the Provenance of Indigenous Peoples’ Data. Furthermore, IEEE developed the Planet Positive 2030 program reflecting a commitment to regenerative sustainability, which seeks to foster a net positive impact on the planet.

Highlighting the urgent need to protect younger generations and their future, Havens advocated for new standards in age-appropriate design and sustainability, emphasizing the inclusion of children and future generations in technology innovation. Havens concluded by challenging the predominance of Western rationality in AI governance, and advocating for values like relationality and community care to guide AI development.
Margot Skarpeteig reflected on the 75th anniversary of the Universal Declaration of Human Rights against the backdrop of significant technological advancements, particularly in AI. She emphasized the challenges posed by these advancements, noting how the potential of AI as a force for positive change is currently overshadowed by threats to human dignity and agency.

Skarpeteig underscored The World Bank's awareness of its crucial role in upholding human rights within the global digital marketplace. She highlighted the efforts of the Human Rights Trust Fund, which supports World Bank staff in understanding the intersection of human rights and development in their operations and analytics.

Furthermore, Skarpeteig discussed The World Bank's initiative to develop a comprehensive framework for identifying and mitigating the human rights risks associated with AI in their institution’s operations.
REGULATION & ENFORCEMENT
Reflecting on the burgeoning influence of AI in 2023, Senator Blumenthal underscored the significant impact AI has on the economy, safety, and democracy. He cautioned against Congress repeating past errors seen in the technological revolutions of the previous decade, particularly referencing the challenges faced with the rapid growth of social media. Senator Blumenthal noted Congress's failure to act in the past, which led to the rise of monopolistic companies wielding disproportionate power.

Drawing from his experience as chair of the Judiciary Subcommittee on Privacy, Technology, and the Law in 2023, he shared insights from witness testimony by industry leaders—including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and Microsoft President and Vice Chair Brad Smith—who, unlike social media executives in the past, expressed a unanimous call for AI regulation.

In August, Senator Blumenthal and Senator Hawley announced a bipartisan framework for a U.S. AI Act. This framework proposes establishing a licensing regime for entities engaged in high-risk AI development and creating an independent oversight body with AI expertise. It lays out specific principles for upcoming legislation aimed at protecting national and economic security, enforcing transparency about AI model limitations and uses, protecting consumers and children, and implementing rules like watermarking, disclosure of AI usage, and data access for researchers. Furthermore, the framework addresses the accountability of AI companies, holding them liable for privacy breaches, civil rights violations, or other harms.

Reflecting on international developments, Senator Blumenthal highlighted the significance of the EU's AI Act, lauding it as a groundbreaking effort that sets baseline rules and standards for AI, and providing a valuable model for AI regulation akin to the EU's initiatives in privacy, competition, and online safety.
Senator Brian Schatz's keynote addressed the critical issue of regulating dual-use foundation models at the federal level in the United States. He emphasized the vital role of the federal government in this endeavor while acknowledging the current lack of a unified approach.

Senator Schatz critiqued traditional regulatory methods, which typically either address harms on a case-by-case basis or establish an extensive list of statutory provisions, which would be ineffective for the rapidly evolving field of AI. He stressed the need for the US to develop basic, common-sense, future-proof principles that encourage developers and deployers to innovate responsibly.

Stressing the crucial role of enforcement, Senator Schatz highlighted the role of federal agencies, but cautioned against oversimplified statutory frameworks that could be manipulated by tech corporations. Senator Schatz stressed the importance of a nuanced and adaptable regulatory framework capable of addressing the multifaceted challenges posed by AI technologies.

Focused on the immediate steps necessary in AI regulation, Senator Schatz proposed requiring clear disclosure when online content is machine-generated, which would enhance transparency and accountability in the digital realm. Furthermore, he emphasized the urgent need for regulations concerning the use of data in training AI models, advocating for a duty of care from data collectors towards individuals whose data is being utilized.
Legislative guardrails are essential not only to safeguard consumers and intellectual property but also to preserve the very foundations of democracy. Senator Amy Klobuchar's keynote underscored the urgent need for legislative guardrails for generative AI and the importance of international coordination.

Increasingly sophisticated AI-generated content can spread misinformation related to elections, such as inaccurate information about voting logistics, posing concrete risks to democratic processes and the upcoming election in the United States. Senator Klobuchar highlighted key bipartisan efforts to combat the growing threat of deepfakes in U.S. electoral processes. She discussed the need to confront deceptive practices while ensuring free speech—an approach encapsulated in the Deceptive AI Act, aimed at curbing the use of fraudulent content in political advertising.

Highlighting another critical issue with regulating generative AI and protecting the information ecosystem, Senator Klobuchar advocated for the protection of individuals and content creators against the unauthorized use of their voice, likeness, and proprietary work. She stressed the importance of protecting local news organizations, for instance, from undue reproduction and use of training data without compensation by large platforms. The Journalism Competition and Preservation Act, as she mentioned, aims to empower local news outlets to negotiate fair compensation for their content—an issue closely related to information integrity and trust in information ecosystems and democratic institutions.

Taking the discussion back to power dynamics and democratic control over AI, Senator Klobuchar emphasized the need to modernize U.S. competition laws to address the unique challenges posed by the concentration of power in the AI landscape, and called for legislation to ensure transparency and accountability, particularly for high-risk AI applications. Finally, recognizing that AI's challenges transcend national borders, Senator Klobuchar advocated for global cooperation in developing and harmonizing AI governance frameworks to effectively address these universal challenges.
Main Takeaways
BALANCE HORIZONTAL FRAMEWORKS WITH SECTOR-SPECIFIC REGULATION:
Stakeholders must advance discussions around the legal considerations in allocating liability along the AI value
chain to develop robust and legally sound doctrine and policy. Emerging AI liability regimes should consider
existing regulatory frameworks and, when appropriate, complement them, such as with contractual, legal, and
regulatory liability in different sectors.
Discussion
Liability, often defined in contracts between value chain actors, is increasingly being considered at the regulatory level as the consequences of AI systems grow more severe. This fireside chat delved into the complexities of regulating AI across its value chain.

Acknowledging that regulation will be crucial for both risk mitigation and industry innovation, the discussion highlighted the significant challenge of allocating responsibility within the AI value chain. Discussions spanned governance approaches for accountability and liability, how supply chain actors are reacting to regulatory trends, and the balancing act of advancing responsible AI across jurisdictions.

Speakers emphasized the urgency of aligning risk assessment and compliance with regulatory trends, as the cost of non-compliance increases with every major jurisdiction that enacts AI regulations. A key point of discussion was the regulation of AI developers and deployers and the varying levels of risk associated with different sizes of AI models, particularly in the context of generative AI applications. The conversation touched upon how the EU AI Act might influence regulatory approaches to foundation models in other jurisdictions.

Moderator Anna Gressel touched upon the evolving nature of liability in the AI sector. Beyond regulation of dual-use foundation models, product liability should also be considered in the US, following Europe's lead on the matter. Given the current scenario of divergent regulatory proposals and self-governance approaches, panelists underscored the importance of developing and implementing standards, as set by organizations like ISO and IEEE, to harmonize approaches and reduce compliance costs across jurisdictions.

Focusing on corporate responsibility, Cameron Kerry highlighted the need for companies to invest in transparency, incident reporting, and thorough auditing processes regardless of binding regulation. Kerry drew an analogy to the careful planning and diligent, repeated measurement required in the practice of carpentry, emphasizing the need for similar diligence and precision in AI regulation and deployment.

Addie Cooke pointed out the increasingly relevant role of model monitoring tools in the industry, which can help raise the technical bar for risk assessment. She noted that evaluations should be done at different stages of the value chain and praised NIST's AI Risk Management Framework for its adaptability and usefulness for the industry.
Main Takeaways
DEVELOP AND IMPLEMENT A SET OF REGULATORY TOOLS TO OPERATIONALIZE SAFETY BY DESIGN:
Such tools must be interoperable across jurisdictions, given the borderless character of the foundation models
value chain. Regulators should invest in regulatory sandboxes to rigorously test and refine foundation models
pre-deployment. This effort should be informed by comprehensive regulatory guidance, global metrics,
industry-wide standards, and interoperable benchmarks.
Discussion
Over the course of 2023, increasing market demand for AI has pushed the public interest to the margins. However, regulatory developments have provided a democratic path to balance stakeholders' power and protect the public interest: the European Union's AI Act. In its final stages, it represents a robust effort to regulate AI models increasingly prevalent in consumer markets and to impose guardrails that uphold safety and fundamental rights. Nevertheless, ongoing work is necessary to ensure the Act's strength and enforceability and to transmit regulatory lessons to other jurisdictions. The role of institutions responsible for enforcement at the national level—such as the EU AI Act's European AI Office—is crucial in this regard. This fireside chat focused on the critical role of coordination between AI policy enforcement bodies, particularly those of the EU and the US.

In opening remarks to the panel, MEP Dragos Tudorache shared insights on the trilogue process for the EU AI Act, underscoring the importance of learning from the EU regulatory journey and reflecting on upcoming challenges for enforcement. The international community must ramp up a coordinated approach to maximize countries' capacity for enforcement, be it within EU AI Act jurisdictions or beyond EU borders as regulations emerge in other countries.

Dr. Lynne Parker offered insights into how the EU's regulatory path might have influenced the U.S. government's approach to AI governance. When it comes to narrow systems, the sectoral-based approach discussed in the EU resonates with the US regulatory structure, which comprises different agencies with expertise in different economic sectors. Those agencies are already studying or investigating the impact of AI within their mandates. Dr. Parker suggested that federal institutions are well-equipped to take up a two-pronged approach: sector-specific regulations coupled with a comprehensive AI governance framework, such as the work the U.S. executive branch has been advancing since the publication of the Blueprint for an AI Bill of Rights and, more recently, Executive Order 14110.

Moving the discussion from executive powers to legislative powers, Marek Havrda articulated the challenges in transitioning AI legislation from theory to practice, with a particular focus on the role of a European AI Office. This institution would have a central role in gathering intelligence around the regulatory learning stemming from oversight in member states' jurisdictions and from regulatory sandboxes—should those provisions be approved in the final text of the EU AI Act. Finally, looking into the Executive's role, Havrda underscored the importance of coordination among national AI offices
for consistent enforcement across the EU, especially with respect to high-risk systems. If successful, he remarked, this model could be extended to international collaborations.

Shifting the lens to the global stage, the G7 Hiroshima Code of Conduct was identified as a significant step in guiding the behavior of foundation model developers. However, there remains an urgent need for binding regulations, like the EU AI Act, to manage and mitigate systemic risks effectively.

As we broaden the debate around AI governance and enforcement to include underrepresented voices and increase democratic participation, speakers cautioned that the focus on AI safety must not overshadow other complex yet crucial AI discussions. Foundation models have profound implications for a wide array of downstream applications affecting the global population. Discussions tend to focus on the downstream impact of AI applications on society, such as the deployment of surveillance technologies, but, speakers remarked, it is urgent to rein in corporations' actions during the design and development of AI systems, rather than focusing solely on deployment.

As the world turns its eyes to the profound and borderless impact of some foundation models, the concept of "safety by design" is crucial in mitigating systemic risks at the initial stages of AI development.
REMARKS BY CO-HOSTS
Remarks by Co-hosts
The Ambassador for Greece in the United States highlighted Greece’s leadership in advancing the rule of law
in the age of AI. As AI will bring about a transformative moment for humanity, it will be crucial to leverage its
potential to promote stronger democracies. The Ambassador also acknowledged the challenges
accompanying such technology, noting that AI must also be tamed so as not to endanger the values of
democracy and the rule of law.
Drawing from the Hellenic democratic tradition, Stefanos Vitoratos stressed that the preservation of our
fundamental values should be prioritized and reflected in the core of AI development endeavors.
Acknowledging the importance of democratic debate, Vitoratos commended the kaleidoscope of perspectives
presented by legislators, policymakers, law practitioners, civil society representatives, and developers in the
quest to govern AI across jurisdictions. He expressed concern with the rise of national security discourses
that may exclude AI development from public scrutiny. Vitoratos stressed that stakeholders across jurisdictions share a common task: forging new mechanisms and institutional solutions to safeguard the rule of law and steer AI development and deployment toward the public interest.
The President of George Washington University (GW), Dr. Granberg, stressed the importance of cross-stakeholder solutions for AI governance and the critical role conversations such as The Athens Roundtable have in informing both policy and academic priorities in this field. As powerful agents of change, academics have a distinct role in such conversations: breaking down disciplinary silos to protect and enhance the human experience and fundamental rights in the age of AI.
Vice Provost Pamela Norris emphasized academia's pivotal role in establishing guardrails and good
governance for AI systems, particularly for future generations. Dr. Norris stressed academics’ unique role in
advising policymakers. She called for rigorous, evidence-based research to inform policy and foster trustworthy
AI alongside democratic values. Dr. Norris also underscored the need for training the next generation of AI
professionals to develop AI that is safe and trustworthy, with the potential to positively transform
communities.
CONCLUSION
Conclusion
The fifth edition of The Athens Roundtable shed light on the urgency of adopting a multifaceted approach to AI governance—one that encompasses comprehensive regulations, precise definitions and metrics, and robust enforcement mechanisms.

The recommendations emerging from the dialogue point towards a future where AI development is not only governed by the principles of safety and responsibility, but also steered by a harmonized legal framework that transcends borders and sectors. To achieve this, a collaborative effort is required, bringing together policymakers, developers, civil society, and impacted communities. Beyond coordination, we must develop liability frameworks and governance regimes for general-purpose foundation models that are adaptive and agile. These steps are critical in fortifying our democratic institutions to be resilient to the disruptive potential of AI—ensuring that technological progress does not come at the cost of societal well-being and democratic values.

Moving forward, The Future Society's role in facilitating dialogues and spearheading collaborations for institutional innovation becomes more crucial than ever. The insights and policy recommendations from the Athens Roundtable provide a roadmap for action, but they also serve as a reminder of the challenges ahead. The goal is clear: guide AI development in a manner that upholds fundamental rights and the rule of law. Achieving this will require continued commitment, creativity, and cooperation from all stakeholders involved. We look forward to collaborating with Roundtable partners, participants, and readers of this report in furthering our mission of aligning artificial intelligence through better governance in the years ahead.
Contact Us!
GENERAL | [email protected]
PRESS | [email protected]
www.thefuturesociety.org