EY Guidance On AI

AI’s potential to create positive human impact will depend on a responsible, human-centered approach

that focuses on creating value for all.

In brief

● Legislators are developing distinctly different policy approaches to regulating AI.


● EY research has identified six common trends in AI oversight.
● Companies can take several actions to stay ahead of the rapidly evolving AI regulatory
landscape.

This article was written by Nicola Morini Bianzino, EY Global Chief Technology Officer; Marie-Laure
Delarue, Global Vice Chair, Assurance; Shawn Maher, EY Global Vice Chair, Public Policy; and
Ansgar Koene, EY Global AI Ethics and Regulatory Leader; with contributions by Katie Kummer,
Global Deputy Vice Chair, Public Policy; and Fatima Hassan-Szlamka, Associate Director, Global
Public Policy.

The accelerating capabilities of Generative Artificial Intelligence (GenAI) — including large language
models (LLMs) — as well as systems using real-time geolocation data, facial recognition and advanced
cognitive processing, have pushed AI regulation to the top of policy makers’ inboxes.

It isn’t simple. In Europe, for example, while some member countries want to liberalize the use of facial
recognition by their police forces, the EU Parliament wants to impose tight restrictions as part of the AI
Act.1 In another debate on AI legislation, the Indian Ministry of Electronics and IT published a strong
statement in April, opting against AI regulation and stating that India “is implementing necessary
policies and infrastructure measures to cultivate a robust AI sector, but does not intend to introduce
legislation to regulate its growth.”2 Yet in May, the IT Minister announced India is planning to regulate
AI platforms like ChatGPT and is “considering a regulatory framework for AI, which includes areas
related to bias of algorithms and copyrights.”3 Similarly, while the US is not likely to pass new federal
legislation on AI any time soon, regulators like the Federal Trade Commission (FTC) have responded to
public concerns about the impact of Generative AI by opening expansive investigations into some AI
platforms.4

AI is transforming a diverse range of industries, from finance and manufacturing to agriculture and
healthcare, by enhancing their operations and reshaping the nature of work. AI is enabling smarter fleet
management and logistics, optimizing energy forecasting, creating more efficient use of hospital beds by
analyzing patient data and predictive modeling, improving quality control in advanced manufacturing,
and creating personalized consumer experiences. It is also being adopted by governments that see its
ability to deliver better service to citizens at lower cost to taxpayers. Global private-sector investment
in AI is now 18 times higher than in 2013.5 AI is potentially a powerful driver of economic growth and a
key enabler of public services.

However, the risks and unintended consequences of GenAI are also real. A text-generation engine that
can convincingly imitate a range of registers is open to misuse; voice-imitation software can mimic an
individual’s speech patterns well enough to convince a bank, workplace or friend. Chatbots can be used
to cheat on tests. AI platforms can reinforce and perpetuate historical human biases (e.g., based on gender, race or
sexual orientation), undermine personal rights, compromise data security, produce misinformation and
disinformation, destabilize the financial system and cause other forms of disruption globally. The stakes
are high.

Legislators, regulators and standard setters are starting to develop frameworks to maximize AI’s
benefits to society while mitigating its risks. These frameworks need to be resilient, transparent and
equitable. To provide a snapshot of the evolving regulatory landscape, the EY organization (EY) has
analyzed the regulatory approaches of eight jurisdictions: Canada, China, the European Union (EU),
Japan, Korea, Singapore, the United Kingdom (UK) and the United States (US). The rules and policy
initiatives were sourced from the Organization for Economic Co-operation and Development (OECD)
AI policy observatory6 and are listed in the appendix to the full report.

Six regulatory trends in Artificial Intelligence

While each jurisdiction has taken a different regulatory approach, in line with its own cultural norms
and legislative context, there are six areas of cohesion that unite under the broad principle of
mitigating the potential harms of AI while enabling its use for the economic and social benefit of
citizens. These areas of unity provide strong fundamentals on which detailed regulations can be built.

1. Core principles: The AI regulation and guidance under consideration are consistent with the core
principles for AI as defined by the OECD and endorsed by the G20.7 These include respect for human
rights, sustainability, transparency and strong risk management.
2. Risk-based approach: These jurisdictions are taking a risk-based approach to AI regulation,
tailoring their rules to the perceived risks AI poses to core values such as privacy, non-discrimination,
transparency and security. This tailoring follows the principle that compliance obligations should be
proportionate to the level of risk (low risk means no or very few obligations; high risk means
significant and strict obligations).
3. Sector-agnostic and sector-specific: Because of the varying use cases of AI, some jurisdictions
are focusing on the need for sector-specific rules, in addition to sector-agnostic regulation.
4. Policy alignment: Jurisdictions are undertaking AI-related rulemaking within the context of
other digital policy priorities such as cybersecurity, data privacy and intellectual property protection –
with the EU taking the most comprehensive approach.
5. Private-sector collaboration: Many of these jurisdictions are using regulatory sandboxes as a
tool for the private sector to collaborate with policymakers to develop rules that meet the core objective
of promoting safe and ethical AI, as well as to consider the implications of higher-risk innovation
associated with AI where closer oversight may be appropriate.
6. International collaboration: Driven by shared concern over the fundamental uncertainties
regarding the risks to safety and security posed by powerful new generative and general-purpose AI
systems, countries are pursuing international collaboration to understand and address these risks.
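
The proportionality principle behind the risk-based approach (trend 2) can be sketched in code. This is a minimal, illustrative sketch only: the tier names and the obligations mapped to them are hypothetical examples for this article, not any jurisdiction's actual rules.

```python
# Illustrative sketch: compliance obligations scale with an AI system's risk tier.
# Tier names and obligations are hypothetical, not any jurisdiction's actual rules.
from enum import Enum


class RiskLevel(Enum):
    MINIMAL = 1       # low risk: no or very few obligations
    LIMITED = 2       # modest transparency duties
    HIGH = 3          # significant and strict obligations
    UNACCEPTABLE = 4  # use prohibited outright

# Hypothetical mapping from risk tier to compliance obligations.
OBLIGATIONS = {
    RiskLevel.MINIMAL: [],
    RiskLevel.LIMITED: ["transparency notice to users"],
    RiskLevel.HIGH: [
        "transparency notice to users",
        "documented risk management",
        "human oversight",
        "conformity assessment before deployment",
    ],
    RiskLevel.UNACCEPTABLE: ["prohibited - do not deploy"],
}


def obligations_for(risk: RiskLevel) -> list[str]:
    """Return compliance obligations proportionate to a system's risk tier."""
    return OBLIGATIONS[risk]
```

The key property the sketch illustrates is monotonicity: a higher tier never carries fewer obligations than a lower one.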

Further considerations on AI for policymakers


Other factors to consider in AI policy development include:

● Ensuring regulators have access to sufficient subject matter expertise to successfully implement,
monitor and enforce these policies
● Ensuring policy clarity about whether the intent of rulemaking is to regulate risks arising from the
technology itself (e.g., properties such as natural language processing or facial recognition), from how
the AI technology is used (e.g., the application of AI in hiring processes), or both
● Examining the extent to which risk management policies and procedures, as well as the
responsibility for compliance, should apply to third-party vendors supplying AI-related products and
services

In addition, policymakers should, to the extent possible, engage in multilateral processes to make AI
rules interoperable and comparable across jurisdictions, in order to minimize the risks of regulatory
arbitrage – risks that are particularly significant for rules governing the use of a transnational
technology like AI.
Action steps for companies

For company leaders, understanding the core principles underlying AI rules, even if those rules do not
presently apply to them, can help instill confidence among customers and regulators in their use of AI
and thereby provide a potential competitive advantage in the marketplace. It can also help companies
anticipate the governance needs and compliance requirements that may apply to their development and
use of AI, making them more agile.

Based on the identified trends, there are at least three actions businesses can take now to remain a step
ahead of the rapidly evolving AI regulatory landscape.

1. Understand AI regulations that are in effect within the markets in which you operate. You can
align your internal AI policies with those regulations and any associated supervisory standards.
2. Establish robust and clear governance and risk management structures and protocols, as well as,
where appropriate, accountability mechanisms to enhance how you manage AI technologies.
3. Engage in dialogue with public sector officials and others to better understand the evolving
regulatory landscape, as well as to provide information and insights that might be useful to
policymakers.

For governance approaches to strike the right balance between government oversight and innovation,
it’s important that companies, policymakers and other stakeholders engage in open conversations. All
these parties are testing the waters and exploring the new possibilities enabled by AI.
New rules will be needed. Fortunately, as our review shows, there is wide agreement among countries
on the foundational principles to govern the use of AI. At this unique moment of possibility and peril,
now is the time to cooperate on turning those principles into practice.

What EY can do for you


Our suite of strategy, design, architecture, data, systems integration, program operations and risk
services is combined with our deep domain and sector knowledge.

To realize value from AI, you need innovative integration and orchestration of robotic, intelligent and
autonomous capabilities at the system level. This transformation can occur in five non-discrete
domains:

● Insights: Discover deeper insights, faster, in ways that augment human cognition
● Performance: Design systems that learn from data and experience to improve outcomes over
time
● Automation: Leverage robotic, intelligent and autonomous capabilities to transform operations
through automation
● Experiences: Enhance human experiences using systems that predict, sense, learn and move
● Trust: Design, build and monitor automated systems to promote and sustain trust

Our Consulting team is ready to help you fully realize business benefits from AI.

First, we’ll demystify and help your team understand the value and risks, pragmatically defining the
capabilities needed for your organization to adopt and scale AI. Then we’ll work with you to incorporate
the robotic, intelligent and autonomous capabilities that will transform and innovate the way you operate
and compete in the Transformative Age.

Six steps to confidently manage data privacy in the age of AI
Organizations need to address increased privacy and regulatory concerns raised by AI.

In brief

● With growing use of AI, there is also the potential for increased data privacy risk.
● Clear and consistent regulations on data use in AI have yet to be developed.
● Organizations should proactively take steps to maintain data privacy commitments and
obligations as they use AI.

The speed with which artificial intelligence (AI), generative AI (GenAI) and large language models
(LLMs) are being adopted is producing an increase in risks and unintended consequences relating to
data privacy and data ethics.
New LLMs are processing vast swathes of data – often taken from many sources without permission.
This is causing understandable concerns over citizens’ privacy rights, as well as the potential for AI to
make biased decisions about loans, job applications, dating sites and even criminal cases.

Things are moving quickly and many regulatory authorities are just starting to develop frameworks to
maximize AI’s benefits to society while mitigating its risks. These frameworks need to be resilient,
transparent and equitable. While the EU has taken the most comprehensive approach with new
and anticipated legislation on AI, efforts to understand and agree how AI should be regulated have been
largely uncoordinated. So it’s little surprise that leading industry figures are calling on governments to
step up and play a greater role in regulating the use of AI.1 To provide a snapshot of the evolving
regulatory landscape, the EY organization has analyzed the regulatory approaches of eight
jurisdictions: Canada, China, the European Union (EU), Japan, Korea, Singapore, the United
Kingdom (UK) and the United States (US).

Ideally, businesses will develop adaptive strategies tailored to the rapidly changing AI environment;
however, this may be difficult as many businesses are at the early stages of AI maturity. This creates a
challenging situation for businesses wanting to progress but also needing to maintain regulatory
compliance and customer confidence in how the business is handling data. “There’s tension between
being first versus part of the pack. Organizations should implement an agile controls framework that
allows innovation but protects the organization and its customers as regulations evolve,” notes Gita
Shivarattan, UK Head of Data Protection Law Services, Ernst & Young LLP. In this article, we look at six
key steps data privacy officers can take to help organizations stay true to their priorities and obligations
around data privacy and ethics as they deploy new technologies like AI.

1. Get your privacy risk compliance story in order

Assess the maturity of your privacy risk controls and overall privacy compliance to create a strong
foundation for AI governance. It’s crucial to articulate a compelling compliance story to your own
people and regulators. Modern privacy laws are being enacted in different jurisdictions at pace, but
most (if not all) draw heavily on the General Data Protection Regulation (GDPR). So, when looking at
privacy risk and ethics, make sure to “leverage the lessons learned from
GDPR,” advises Matt Whalley, Partner, Law, Ernst & Young LLP. “Ensure you document relevant
elements of your decision-making in case you are asked to justify this in the future.” Ultimately, use
your story to show how compliance builds customer confidence and avoids reputational damage and
financial penalties, while also benefitting the top line: it enables AI innovation and data-driven
decision-making while managing risk.
2. Set up risk controls, governance and accountability

Risk controls and governance frameworks can help organizations build confidence in their AI
applications in the absence of clear regulation. Yet, according to a 2022 EY study, only 35% of
organizations have an enterprise-wide governance strategy for AI.

A robust AI governance program should cover the data, model, process and output while striking a
balance between innovation and responsibility. It should enable your product development teams to
experiment without stepping into high-risk areas that could put regulators on notice and damage
customer confidence. Also, “AI models should be transparent,” notes Shivarattan, “so that regulators
and citizens can see where data comes from and how it’s processed, to assure privacy and avoid bias.”
Most important, governance frameworks should ensure responsibility and accountability for AI systems
and their outcomes by:

● Establishing clear procedures for the acceptable use of AI.


● Educating all stakeholders responsible for driving and managing the use of data.
● Keeping auditable records of any decisions relating to AI, including privacy impact assessments
and data protection impact assessments.
● Defining a procedure to manage bias if it is detected.
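
The "auditable records" point above can be sketched as a minimal append-only log of AI governance decisions. The field names here are illustrative assumptions, not a standard schema; a real implementation would align them with your own impact-assessment templates and tooling.

```python
# Minimal sketch of an auditable log of AI governance decisions.
# Field names are illustrative, not a standard schema.
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    system: str                   # which AI system the decision concerns
    decision: str                 # what was decided (e.g., use case approved)
    decided_by: str               # accountable owner of the decision
    pia_reference: str            # link to the privacy impact assessment
    dpia_reference: str           # link to the data protection impact assessment
    bias_procedure_applied: bool  # whether the bias-management procedure ran
    timestamp: str = ""           # filled in automatically if omitted

    def __post_init__(self) -> None:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


def append_record(log: list, record: AIDecisionRecord) -> None:
    """Append the record as one JSON line; never rewrite earlier entries."""
    log.append(json.dumps(asdict(record)))
```

Writing each decision as an immutable JSON line keeps the trail easy to review if a regulator asks you to justify a past decision.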


3. Operationalize data ethics

There is a clear interlock between data ethics, data privacy and responsible AI. “Data ethics compels
organizations to look beyond legal permissibility and commercial strategy in the ‘can we, should we’
decisions about data use,” according to Whalley.

After reviewing existing policies and operating models, identifying the key principles and policies to
follow is one of the first steps organizations should take in operationalizing data ethics. Technology can
be used to embed these principles and policies into front-line decision-making to help ensure they are
considered together with regulatory obligations.

The principles may originate from within the organization itself as an extension of pre-existing values or
employee sentiment. For example, to define an acceptable use of AI, you could follow similar steps to
assessing what is a reasonable use of personal data by determining whether there is a legitimate
interest, whether the benefits of the outcome outweigh the individual right to autonomy and whether
you have given sufficient weight to the potential for individuals to suffer (unexpected) negative
outcomes.
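
The balancing steps just described can be sketched as a simple gating function. The question set and verdict strings are illustrative assumptions for this article, not a legal standard; a real assessment would be documented and reviewed by qualified people, not decided by code.

```python
# Illustrative sketch of the 'can we, should we' balancing test described above.
# The questions and verdicts are examples, not a legal standard.
def assess_acceptable_use(
    legitimate_interest: bool,
    benefits_outweigh_autonomy: bool,
    negative_outcomes_considered: bool,
) -> str:
    """Return a verdict: proceed only if every balancing question passes."""
    if not legitimate_interest:
        return "stop: no legitimate interest identified"
    if not benefits_outweigh_autonomy:
        return "stop: benefits do not outweigh individual autonomy"
    if not negative_outcomes_considered:
        return "pause: weigh potential (unexpected) negative outcomes first"
    return "proceed: document the assessment and keep it auditable"
```

The point of the sketch is that a single failed question halts the use case; passing every question still ends with a documentation duty, not a blank cheque.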
The principles may also arise from other sources including third-party organizations or customer
sentiment. For example, in the absence of clear regulatory direction, some organizations with advanced
AI models are taking steps to identify the standards that could apply across industries.2
The best path forward may not always be clear, so educating stakeholders on the policies and principles
is critical, and a “trade-off framework” should be developed to help work through conflicts.

Lastly, having line of sight into data use in connection with AI across the organization is critical for
monitoring data ethics compliance. Since data privacy concerns extend to suppliers and other third
parties, those vendors should also be contractually required to disclose when AI is used in any
solutions or services they provide.

4. Report data privacy and ethics risks at board level

Stakeholders will need to work together to help the board understand and mitigate the risks associated
with AI and make strategic decisions within an overarching ethical framework. Responsibility is often
divided between the Data Protection Officer (DPO) or Chief Privacy Officer (CPO) – who will possibly
have responsibility for data ethics – and the Chief Data Officer (CDO). Some organizations may want to
go further and appoint a Chief AI Officer (CAIO). Together, these senior leaders will need to help ensure
the right checks and balances are in place around the ethical uses of data in AI.

5. Expand horizon scanning to include customer sentiment

In April 2023, Italy became the first Western country to (temporarily) block an advanced GenAI chatbot
amidst concerns over the mass collection and storage of personal data.3 Japan’s privacy watchdog also
spoke out, warning the chatbot's operator not to collect sensitive data without people's permission.4
Such actions can, at a stroke, destroy the value of investments in AI. Systematic forward-looking
analysis or horizon scanning is vital to reduce the uncertainty of regulatory change and help avoid
unexpected developments. But it’s not just about regulations – companies also need to stay in touch
with what customers are thinking about AI usage and data privacy. Stay ahead of regulators by talking
to your customers regularly to understand acceptable limits and “no-go” areas.

6. Invest in compliance and training

In a relatively short time, interest in using AI has multiplied, putting pressure on employees across
organizations to understand the implications of its use and its impact on data privacy. Many
organizations may have to hire additional specialists as well as train and upskill existing compliance
teams, combining on-the-job training with theoretical study.
It’s especially important to train employees in AI-facing roles such as developers, reviewers and data
scientists, helping them understand the limitations of AI, where AI is prone to error, appropriate ethics
and how to complement AI with human intervention. In addition to operational guidance on
implementing AI controls, you will need to cultivate a mindset that balances innovation with an
appreciation of data privacy and ethics.

Summary

The acceleration in adoption of AI is increasing the risk of non-compliance with data privacy
regulations and the likelihood that your organization could inadvertently damage customer confidence.
By taking steps to help assure data privacy and responsible deployment in the use of AI, organizations
can move forward with innovation, win the confidence of customers and maintain compliance.
