What Is AI Ethics
Ethics is a set of moral principles that help us discern between right and
wrong. AI ethics is a multidisciplinary field that studies how to maximize
AI's beneficial impact while reducing risks and adverse outcomes.
Examples of AI ethics issues include data responsibility and privacy,
fairness, explainability, robustness, transparency, environmental
sustainability, inclusion, moral agency, value alignment, accountability,
trust, and technology misuse.
With the emergence of big data, companies have increased their focus on
automation and data-driven decision-making across their organizations.
While the intention is usually, if not always, to improve business
outcomes, some companies are experiencing unforeseen consequences in
their AI applications, particularly due to poor upfront research design
and biased datasets.
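One concrete way such dataset bias surfaces is when a model's positive predictions are distributed unevenly across demographic groups. The sketch below is purely illustrative (it is not an IBM tool, and the data is made up): it computes a simple "demographic parity gap", the largest difference in positive-prediction rates between groups, which practitioners often use as a first bias check.

```python
def demographic_parity_gap(predictions, groups):
    """Return the max difference in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs (1 = positive outcome)
    groups: iterable of group labels, aligned with predictions
    """
    counts = {}  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        totals = counts.setdefault(group, [0, 0])
        totals[0] += pred
        totals[1] += 1
    rates = {g: pos / n for g, (pos, n) in counts.items()}
    return max(rates.values()) - min(rates.values())


# Hypothetical model outputs (e.g., 1 = loan approved) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved 75% of the time, group "b" only 25%:
# a gap of 0.50, which would warrant investigating the training data.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

A gap of zero means both groups receive positive predictions at the same rate; in practice teams set a tolerance threshold and audit any model that exceeds it.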
As rules and protocols develop to manage the use of AI, the academic
community has leveraged the Belmont Report (PDF, 121 KB) as a means to
guide ethics within experimental research and algorithmic development.
Three main principles came out of the Belmont Report that serve as a
guide for experiment and algorithm design: respect for persons,
beneficence, and justice.
As businesses become more aware of the risks of AI, they have also
become more active in the discussion around AI ethics and values. For
example, in 2020 IBM's CEO Arvind Krishna shared that IBM had sunset
its general-purpose facial recognition and analysis products,
emphasizing that “IBM firmly opposes and will not condone uses of any
technology, including facial recognition technology offered by other
vendors, for mass surveillance, racial profiling, violations of basic human
rights and freedoms, or any purpose which is not consistent with our
values and Principles of Trust and Transparency.”
Accountability
There is no universal, overarching legislation that regulates AI practices,
but many countries and states are working to develop and implement
such regulations locally. Some pieces of AI regulation are in place today,
with many more forthcoming. To fill the gap, ethical frameworks have
emerged as part of a collaboration between ethicists and researchers to
govern the construction and distribution of AI models within society.
However, at the moment, these only serve to guide, and research (PDF,
1 MB) shows that the combination of distributed responsibility and a
lack of foresight into potential consequences isn't necessarily conducive
to preventing harm to society.
How to establish AI ethics
Since ethical standards are not the primary concern of data engineers and
data scientists in the private sector, a number of organizations have
emerged to promote ethical conduct in the field of artificial intelligence.
For those seeking more information, the following organizations and
projects provide resources for enacting AI ethics:
IBM has also developed five pillars to guide the responsible adoption of AI
technologies: explainability, fairness, robustness, transparency, and privacy.
These principles and focus areas form the foundation of our approach to
AI ethics. To learn more about IBM's views on ethics and artificial
intelligence, visit IBM's AI ethics page.