
L2GEA 2022-2023

Instructor: Rihab Khrifech

Lesson 5: Big Data and Analytics

Related articles:
• https://knowledge.wharton.upenn.edu/article/how-data-analytics-can-help-deliver-social-good/
• https://knowledge.wharton.upenn.edu/podcast/knowledge-at-wharton-podcast/how-to-get-the-most-from-your-customer-data/
Common Core Standard(s):
• Reading Informational Text
• Speaking and Listening
Purpose:
• Students should be able to understand key terminology.
• They should be able to understand how data and analytics are being leveraged by big
companies.
• They should also be able to identify the main areas of concern in big data ethics.

Other Resources/Materials:
Internet capability; Video

Introduction (5 mins)
What is Big Data? Why do big companies have big data and analytics departments? What
strategies would they develop based on the data they collect?

Watch the video: https://www.youtube.com/watch?v=rqP5k4DGfqo

How Do Customers Feel About Algorithms?


December 13, 2022 • 9 min listen

Many managers worry that algorithms alienate those who would rather deal with a real person than
a robot. New research from Wharton’s Stefano Puntoni looks at how the attitudes of customers are
influenced by algorithmic versus human decision-making.
Customers feel good about a company when its representatives make decisions in their favor, such
as approving their loan application or gold member status. But when an algorithm reaches the same
favorable conclusion, those warm and fuzzy feelings tend to fade.
This surprising contradiction is revealed in a new paper that examines how customers react
differently depending on whether a computer or a fellow human being decides their fate.
In the study, Wharton marketing professor Stefano Puntoni and his colleagues found that customers
are happiest when they receive a positive decision from a person, less happy when the positive
decision is made by an algorithm, and equally unhappy with both man and machine when the news
is bad.
“What’s interesting is that if you talk to companies, they’ll often tell you that they’re reluctant to let
algorithms make decisions because they are worried about what would happen to customers when
things go wrong. But we don’t actually find that. The negative consequences of using algorithms for
companies seem to be, in fact, when the news is good,” Puntoni said during an interview with
Wharton Business Daily on SiriusXM.
The researchers believe the results can be explained through attribution theory, a psychology term
that refers to how people translate their own experiences and perceptions to make sense of their
place in the world. Simply put, people have a psychological need to feel good about themselves,
and it helps to internalize a good decision and externalize a bad one. When a company
representative greenlights a request, customers attribute that to their own exemplary behavior, social
status, excellent credit score, or other value-adds to the firm. That’s harder to do when the decision-
maker is a bot.
“These decisions are diagnostic of some characteristic of ourselves,” Puntoni said. “People find it
easier to internalize the good decision when the decision was made by a person. Now they get what
they want, and it feels better to them that it was a human [deciding] than if it was an algorithm.”
Consumers externalize bad outcomes to protect their feelings of self-worth.
“When they get negative news, the story is different. Then we find that customers blame the
decision-maker for why they did not get what they wanted,” Puntoni said. “In that case, they will do
so no matter who or what made the decision. They just use different strategies to externalize the
outcome.”
A New Look at Old Behavior
The paper, which documents 10 separate studies that the professors used to test their theory, is novel
in its approach. There’s already plenty of anecdotal and scientific evidence that customers have an
aversion to algorithms. When given the choice, consumers don’t usually prefer chatbots to manage
a service complaint and would rather not use software to elicit medical advice or predict stock
prices.
But the professors wanted to know what happens when customers don’t have a choice. It’s the first
paper to examine how the attitudes of customers are influenced by algorithmic versus human
decision-making.
“Our research context is of managerial importance,” the authors wrote, noting that their findings go
against the conventional belief that bots are bad for business. As companies increasingly deploy
algorithms to streamline tasks, drive down costs, and increase efficiency, many managers
worry that using algorithms will alienate customers.
Puntoni also pointed out a bit of irony in the findings: “If you think about it for a second, algorithms
are expected to be more objective and more unbiased than humans. So, if algorithms say you
deserve it, maybe [that is] an even better inference you could make about yourself,” Puntoni said.
“But we don’t find people thinking like that. They just react more positively to a human giving
good news than an algorithm.”
Humanize the Bot
What can companies do to mitigate the negative consequences of algorithms? According to the
study, one solution is to humanize the bot. Anthropomorphizing the algorithm to make it seem more
like a person may leave customers feeling better about the outcome when they receive positive
news.
The scholars tested this idea through one study in which participants were told they were submitting
applications to a country club. Depending on the condition, the application was reviewed by a robot,
a real person named Sam, or an algorithm depicted as a cartoon man or woman named Sam. Although
all the applications were accepted, the participants felt better about the club when dealing with the
real person and worse when dealing with the bot. But their feelings about the person and the human-
like algorithm were similarly positive.
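
To make the three-condition design concrete, here is a minimal illustrative sketch in Python. It is not from the paper: the condition labels, sample size, rating scale, and satisfaction numbers are invented assumptions, chosen only to mirror the reported pattern (human and humanized algorithm rated similarly, plain robot rated lower after a positive decision).

import random
import statistics

random.seed(42)

# Hypothetical mean satisfaction (1-7 scale) per condition.
# These values are invented for illustration, not taken from the study.
condition_means = {
    "human Sam": 5.8,
    "humanized algorithm Sam": 5.7,
    "plain robot": 4.9,
}

def simulate_ratings(mean, n=50, sd=1.0):
    # Draw n noisy ratings around a condition mean, clipped to the 1-7 scale.
    return [min(7.0, max(1.0, random.gauss(mean, sd))) for _ in range(n)]

for condition, mu in condition_means.items():
    ratings = simulate_ratings(mu)
    print(f"{condition:>24}: mean satisfaction = {statistics.mean(ratings):.2f}")

Comparing the three printed means reproduces the qualitative finding only because the inputs were chosen to do so; the actual study measured real participants' responses.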
The paper notes that many companies are already experimenting with strategies that combine both
algorithms and human decision-making. But the authors contend that it isn’t enough to have
employees merely observing these automated functions; representatives need to be actively
involved if they want better customer feedback.
Puntoni also offered cautionary guidance to companies that rely on algorithms to perform human
resources tasks such as shortlisting job candidates or judging performance. Deploying algorithms
for those tasks can have repercussions that ripple across the company.
“In other work in progress, we find that people feel a bit alienated and objectified when an
algorithm is put in place of deciding how good the employee is, and that may have consequences
for the way the employee feels about the company and co-workers,” he said.
1- Answer the following questions:
a- How are algorithms being deployed by companies?
b- What are the shortcomings of such technology?
c- What would be an ideal managerial approach?
2- Find synonyms:
Give permission = ……………, disinclination or abhorrence = ………………,
alleviate = ………………, undulate = ……………

Writing:
What are the main areas of concern in big data ethics that outline the potential for immoral use of data?
