
Case Study: Facebook

Facebook’s stated mission is “to give people the power to build community and bring
the world closer together.” But a deeper look at their business model suggests that it is
far more profitable to drive us apart.

 Does Facebook actually filter out "disinformation that endangers public safety or
the integrity of the democratic process"?
 Does Facebook actually filter out cyberbullying?
 Does Facebook actually filter out calls to violence?
 Does Facebook actually filter out hate speech?

Critics argue that Facebook profits from the spread of misinformation, including
content that is factually wrong. Some say this is part of a deliberate business
strategy to increase profits. This raises the question of whether Facebook has met
its social responsibility, and its ethical and legal obligations, by providing
correct information to the public.

Facebook has had a profound impact on our access to ideas, information, and one
another. It has unprecedented global reach, and in many markets serves as a de-facto
monopolist. The influence it has over individual and global affairs is unique in human
history. 

Critics claim that Facebook does the following:

 Elevates disinformation campaigns and conspiracy theories from the extremist
fringes into the mainstream, fostering, among other effects, the resurgent
anti-vaccination movement, broad-based questioning of basic public health
measures in response to COVID-19, and the proliferation of the "Big Lie" of
2020: that the presidential election was stolen through voter fraud;
 Empowers bullies of every size, from cyber-bullying in schools, to dictators who
use the platform to spread disinformation, censor their critics, perpetuate
violence, and instigate genocide;
 Defrauds both advertisers and newsrooms, systematically and globally, with
falsified video engagement and user activity statistics;
 Reflects an apparent political agenda espoused by a small core of corporate
leaders, who actively impede or overrule the adoption of good governance;
 Brandishes its monopolistic power to preserve a social media landscape absent
meaningful regulatory oversight, privacy protections, safety measures, or
corporate citizenship; and
 Disrupts intellectual and civil discourse, at scale and by design.

Questions:

1. State the problem. What is going on in the external environment? What
problems does management face?

Statement of the Problem

Facebook is a social media platform that enables people to connect and share with
each other, and it has become one of the most widely used sources for staying up to
date with the latest news and information. Facebook has also been adding message
filters to its Messages feature to help screen out unwanted content.

However, critics argue that Facebook's conduct has become unethical. Through
"filter bubbles" (social media algorithms that maximize engagement and create echo
chambers), inflammatory content is given the most visibility. Facebook profits from
the radicalism, bullying, hate speech, disinformation, conspiracy theories, and
verbal violence that spread on the platform. Political extremism, malicious
disinformation, and false stories have crept into mainstream politics and
manifested as deadly, real-world violence due to Facebook's failure to control
them. Because so many people on Facebook engage in unethical behavior, these
actions have far-reaching consequences for countries, governments, the global
population, and every corporation.

Facebook has become the de facto online medium for communication and social
engagement in many regions of the world. The primary platform passed the 2 billion
monthly active user mark in 2017, and global user growth has continued since then,
reaching roughly 2.6 billion by April 2020. Furthermore, Facebook has become an
important component of preserving social interactions in many nations.
2. What courses of action were recommended? What decisions are to be made?
Who is responsible for making them?

1. Eliminating violence
Blocking and deleting fraudulent accounts, detecting and eliminating malicious
activity, restricting the spread of false news and disinformation, and providing
greater transparency in political advertising are some of the harms on Facebook
that need to be eradicated. Facebook must also enhance its machine learning
capabilities, which will allow it to be more successful in detecting and
eliminating illegal activity. While skilled investigators manually uncover
increasingly complicated networks, these technological advancements help to better
identify and stop illegal behavior.

As a result, they will be able to make significant progress. Facebook should also
focus on deleting the millions of false accounts created every day, preventing them
from engaging in the kind of coordinated information operations that are commonly
used to influence voters. Creating a fact-checking system that rates content as
false will also help limit false news. While political meddling is a common
strategy, others use fake news for a variety of purposes, including making money by
fooling people into clicking on something.
2. Responsively focus on the use of AI-based technology
Facebook must be more responsive in its use of AI-based technologies to filter out
hate speech, calls to violence, bullying, and disinformation that endangers public
safety or the integrity of the democratic process. Eli Pariser coined the phrase
"filter bubble" and authored a book about how social media algorithms are designed
to enhance engagement while also creating echo chambers. Filter bubbles aren't only
an algorithmic result; we often filter our own lives by associating with
individuals (both online and offline) who share our philosophical, religious, and
political beliefs.

According to a former Facebook AI researcher, they conducted "study after study"
proving the same fundamental idea: algorithms that optimize engagement promote
polarization. Not only did Facebook realize this, but it proceeded to develop and
grow its recommendation algorithms with the goal of increasing user engagement,
even though this meant optimizing for extremism and polarization. Facebook must
also focus on integrating AI-based technology into an ethical foundation so that
these issues may be handled via checklists, gradual advancements, minor
adjustments, or even scaling back deep learning systems.
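To make the idea of AI-based filtering concrete, here is a minimal, purely
illustrative sketch of a content filter that assigns a post a crude harm score.
Everything here (the flagged phrases, the function names, the threshold) is a
hypothetical toy; Facebook's real systems use large learned models, not keyword
lists, and this sketch only shows the shape of the decision.

```python
# Toy sketch of a content filter (illustrative only; real moderation
# systems rely on trained models, not hand-written keyword lists).

# Hypothetical examples of phrases a policy team might flag.
FLAGGED_TERMS = {"attack them", "fake cure", "rigged election"}

def harm_score(post: str) -> float:
    """Return a crude 0..1 score: the fraction of flagged phrases present."""
    text = post.lower()
    hits = sum(1 for term in FLAGGED_TERMS if term in text)
    return hits / len(FLAGGED_TERMS)

def should_filter(post: str, threshold: float = 0.3) -> bool:
    """Flag the post for review when its harm score meets the threshold."""
    return harm_score(post) >= threshold

print(should_filter("This fake cure will save you!"))  # flags one phrase
print(should_filter("Happy birthday, friend!"))        # nothing flagged
```

A real deployment would replace `harm_score` with a classifier trained on labeled
examples, but the trade-off the essay describes (where to set the threshold, and
what gets amplified versus suppressed) appears in the same place in both versions.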

3. Concentrate more on giving correct information

Because of Facebook's deliberate strategy, technological solutions have failed to
limit the spread of harmful information. The algorithmic outcomes would be
considerably different if Facebook management's business strategy focused on
efficiently providing correct information and varied viewpoints, rather than on
drawing users into entertaining and stimulating material within an information
bubble.
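The claim that a different objective would produce different algorithmic outcomes
can be sketched in a few lines. The posts and their scores below are invented for
illustration, and the 0.7 accuracy weight is an arbitrary assumption; the point is
only that changing the ranking objective reorders the same feed.

```python
# Hypothetical posts with made-up engagement and accuracy scores.
posts = [
    {"id": "conspiracy",  "engagement": 0.9, "accuracy": 0.1},
    {"id": "news_report", "engagement": 0.5, "accuracy": 0.9},
    {"id": "opinion",     "engagement": 0.6, "accuracy": 0.6},
]

def rank_by_engagement(posts):
    """Rank purely by predicted engagement, as the essay says Facebook does."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def rank_accuracy_weighted(posts, w=0.7):
    """Blend accuracy with engagement; a higher w favors correct information."""
    return sorted(
        posts,
        key=lambda p: w * p["accuracy"] + (1 - w) * p["engagement"],
        reverse=True,
    )

print([p["id"] for p in rank_by_engagement(posts)])
print([p["id"] for p in rank_accuracy_weighted(posts)])
```

Under the engagement objective the conspiracy post tops the feed; under the
accuracy-weighted objective it drops to the bottom, which is the shift in
"algorithmic outcomes" the recommendation is arguing for.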

3. What will hinder the company from achieving its objectives?

Given Facebook's dominance as the world's largest social networking site, the
ethical, legal, and social ramifications of its conduct are significantly more
serious than those of smaller platforms. The ethical landscape in which legal and
policy measures to address Facebook's problems must be constructed is occupied by
the stakeholders as well as the harms to which they are exposed. When users behave
irresponsibly, it becomes hard for Facebook to achieve its objectives. The more
data Facebook collects, the more it exposes, and the greater the risk of misuse.
Existing data protection rules are insufficient to shield Facebook users from the
ethical concerns of data processing. As a result, users risk ethical harm if their
data is used against their consent, compromising their privacy, dignity, and sense
of self-identity.

If you were the consultant in this case study, what other recommendations
would you add to ensure that the problems faced by the company are solved?

1. Strict implementation of laws against violence; new privacy rules

Criminality must, without a doubt, be prosecuted. Laws all across the world should
be amended to provide far more significant accountability and responsibility for
the spread of disinformation, violence, and extremism. People who commit acts of
violence should be held legally liable for the consequences of their actions.
Facebook also needs to lay a solid security foundation and develop new privacy
features.
2. Practice of Transparency
Facebook must adopt transparency by making complete data on its recommendation and
filtering algorithms, as well as its other AI implementations, available. Such
research questions are impossible to answer without more transparency and data
availability. Any platform that has such a large influence on people's lives must
be examined in order to fully comprehend its impact. As a result, Facebook must
make public the type of data necessary for a thorough investigation of the
platform.
3. Rethink the growth-and-engagement strategy; monitor leadership.
Facebook's fundamental strategic plan centers on growing user engagement and
growth, and its algorithms are quite good at what they do. Facebook's actions,
such as developing "AI filters" and collaborating with independent fact-checkers,
are largely superficial and ineffective. They cannot begin to tackle the systemic
flaws at the root of the problem, because those flaws serve Facebook's central
purpose. However, the adjustments that must be made go well beyond the successful
use of AI. Facebook will not change because it does not want to and is not
incentivized to do so. Facebook's leadership structure must be reformed and
subjected to ongoing oversight.
