Machine Learning
3. Semi-supervised learning
Semi-supervised learning offers a happy medium
between supervised and unsupervised learning.
During training, it uses a smaller labeled data set
to guide classification and feature extraction from
a larger, unlabeled data set. Semi-supervised
learning can solve the problem of not having
enough labeled data for a supervised learning
algorithm; it also helps when labeling enough
data would be too costly.
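As an illustration of the idea, the sketch below uses scikit-learn's SelfTrainingClassifier, in which unlabeled samples are marked with a label of -1 and a base classifier trained on the small labeled subset iteratively pseudo-labels the rest. The library, dataset, and model choices are assumptions made for this example; the text above does not prescribe any particular implementation.

```python
# Minimal semi-supervised sketch (assumes scikit-learn is installed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy data: 1,000 samples, of which we pretend only 50 are labeled.
X, y = make_classification(n_samples=1000, random_state=0)
y_train = np.full_like(y, -1)   # -1 marks a sample as unlabeled
labeled = np.random.RandomState(0).choice(len(y), size=50, replace=False)
y_train[labeled] = y[labeled]   # keep true labels for a small subset only

# The wrapped classifier trains on the labeled subset, then adds its most
# confident predictions on unlabeled points as pseudo-labels and retrains.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_train)
print("accuracy on all data:", model.score(X, y))
```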
Common machine learning algorithms
A number of machine learning algorithms are
commonly used. These include the following (a
brief code sketch follows the list):
1. Neural networks
2. Linear regression
3. Logistic regression
4. Clustering
5. Decision trees
6. Random forests
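As a quick illustration, the sketch below fits three of the algorithms listed above on toy data using scikit-learn; the library and the synthetic data are assumptions for this example only, not something the article specifies.

```python
# Illustrative sketch of a few common algorithms (assumes scikit-learn).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Regression: a weighted sum of features vs. an ensemble of decision trees.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
print("linear R^2:", LinearRegression().fit(X, y).score(X, y))
print("forest R^2:", RandomForestRegressor(random_state=0).fit(X, y).score(X, y))

# Clustering is unsupervised: k-means groups samples without any labels.
Xc, _ = make_blobs(n_samples=200, centers=3, random_state=0)
print("cluster labels:", KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xc)[:10])
```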
Challenges of machine learning
As machine learning technology has
developed, it has certainly made our lives
easier. However, implementing machine
learning in businesses has also raised a
number of ethical concerns about AI
technologies. Some of these include:
Technological singularity
While this topic garners a lot of public
attention, many researchers are not concerned
with the idea of AI surpassing human
intelligence in the near future. Technological
singularity is also referred to as strong AI or
superintelligence. Philosopher Nick Bostrom
defines superintelligence as “any intellect that
vastly outperforms the best human brains in
practically every field, including scientific
creativity, general wisdom, and social skills.”
Although superintelligence is not imminent,
the idea of it raises some
interesting questions as we consider the use of
autonomous systems, like self-driving cars. It’s
unrealistic to think that a driverless car would
never have an accident, but who is responsible
and liable under those circumstances? Should
we still develop autonomous vehicles, or do we
limit this technology to semi-autonomous
vehicles which help people drive safely? The
jury is still out on this, but these are the types
of ethical debates that are occurring as new,
innovative AI technology develops.
AI impact on jobs
While a lot of public perception of artificial
intelligence centers around job losses, this
concern should probably be reframed. With
every disruptive, new technology, we see that
the market demand for specific job roles shifts.
For example, when we look at the automotive
industry, many manufacturers, like GM, are
shifting to focus on electric vehicle production
to align with green initiatives. The energy
industry isn't going away, but the fuel that
powers vehicles is shifting from gasoline to
electricity.
In a similar way, artificial intelligence will shift
the demand for jobs to other areas. Individuals
will be needed to help manage AI systems, and
people will still be needed to address the more
complex problems within the industries most
likely to be affected by these demand shifts,
such as customer service.
The biggest challenge with artificial intelligence
and its effect on the job market will be helping
people to transition to new roles that are in
demand.
Privacy
Privacy tends to be discussed in the context of
data privacy, data protection, and data
security. These concerns have prompted
policymakers to make strides in recent
years. For example, in 2016, GDPR legislation
was created to protect the personal data of
people in the European Union and European
Economic Area, giving individuals more control
of their data. In the United States, individual
states are developing policies, such as the
California Consumer Privacy Act (CCPA), which
was introduced in 2018 and requires
businesses to inform consumers about the
collection of their data. Legislation such as this
has forced companies to rethink how they store
and use personally identifiable information
(PII). As a result, investments in security have
become an increasing priority for businesses as
they seek to eliminate any vulnerabilities and
opportunities for surveillance, hacking, and
cyberattacks.
Bias and discrimination
Instances of bias and discrimination across a
number of machine learning systems have
raised many ethical questions regarding the
use of artificial intelligence. How can we
safeguard against bias and discrimination
when the training data itself may be generated
by biased human processes? While companies
typically have good intentions for their
automation efforts, Reuters [2] highlights some of the
unforeseen consequences of incorporating AI
into hiring practices. In their effort to automate
and simplify a process, Amazon unintentionally
discriminated against job candidates by gender
for technical roles, and the company ultimately
had to scrap the project. Harvard Business
Review [3] has
raised other pointed questions about the use of
AI in hiring practices, such as what data you
should be able to use when evaluating a
candidate for a role.
Bias and discrimination aren’t limited to the
human resources function either; they can be
found in a number of applications from facial
recognition software to social media
algorithms.
As businesses become more aware of the risks
with AI, they’ve also become more active in
this discussion around AI ethics and values. For
example, IBM has sunset its general-purpose
facial recognition and analysis products. IBM
CEO Arvind Krishna wrote: “IBM firmly opposes
and will not condone uses of any technology,
including facial recognition technology offered
by other vendors, for mass surveillance, racial
profiling, violations of basic human rights and
freedoms, or any purpose which is not
consistent with our values and Principles of
Trust and Transparency.”
Accountability
Since there isn’t significant legislation to
regulate AI practices, there is no real
enforcement mechanism to ensure that ethical
AI is practiced. The current incentives for
companies to be ethical are the negative
repercussions of an unethical AI system on the
bottom line. To fill the gap, ethical frameworks
have emerged as part of a collaboration
between ethicists and researchers to govern
the construction and distribution of AI models
within society. However, at the moment, these
only serve to guide. Some research [4] shows
that the combination of distributed responsibility
and a lack of foresight into potential
consequences is not conducive to preventing
harm to society.