
MASTER OF BUSINESS ADMINISTRATION PROGRAM

ZCMA6112 ORGANIZATIONAL MANAGEMENT


SET 8 SEMESTER 2, 2022/2023

META PLATFORMS INC. (FACEBOOK)


WORKPLACE DILEMMA – AI & MACHINE LEARNING ALGORITHM

BY GROUP 1 – LAW WU JIA

NO. NAME ID NO.


1 ASHRAF AZUAS P131279
2 NURUL SHAZALINA ZAINUDIN P131367
3 LAW WU JIA P131100
4 CARMEN CHAI WAN YIN P131045
5 KUA TIONG YEE P131374
6 LAW JUN YI P131047

SUBMITTED TO
DR. UMMU AJIRAH ABDUL RAUF
WRITTEN REPORT RUBRICS

Rating scale: EXTREMELY POOR (1-2), POOR (3-4), MODERATE (5-6), GOOD (7-8), EXCELLENT (9-10). Weighted marks obtained are recorded against each criterion.

ANALYTICAL SKILLS (10 marks)
Extremely Poor: Not able to discuss the given task.
Poor: Minimal ability to discuss the given task.
Moderate: Some ability to discuss the given task.
Good: Able to discuss the given task.
Excellent: Able to discuss the given task with good illustration.

WRITING SKILLS
a) Content (20 marks)
Extremely Poor: The content was not relevant to the given task.
Poor: The content was minimally relevant to the given task.
Moderate: The content was generally relevant to the given task.
Good: The content was relevant to the given task.
Excellent: The content was very relevant to the given task.

b) Organization (10 marks)
Extremely Poor: The assignment was poorly organized and lacked supporting evidence.
Poor: The organization of the paper was somewhat organized with minimal supporting evidence.
Moderate: The organization of the paper was generally acceptable with some supporting evidence.
Good: The organization of the paper was well organized and supported.
Excellent: The paper was very well organized and supported.

c) Grammar, Mechanics, Usage & Spelling (10 marks)
Extremely Poor: Too many grammatical errors.
Poor: Numerous grammatical errors.
Moderate: Several grammatical errors.
Good: Few grammatical errors.
Excellent: No grammatical errors.

KNOWLEDGE SKILLS (50 marks)
Extremely Poor: Student does not demonstrate the subject knowledge.
Poor: Student demonstrates some grasp of the subject knowledge.
Moderate: Student demonstrates a moderate level of the subject knowledge.
Good: Student demonstrates a sufficient level of the subject knowledge.
Excellent: Student demonstrates sound subject knowledge.

OVERALL ASSESSMENT (100 MARKS) – 20%

PRESENTATION RUBRICS

Visual Appeal (4M)
4: There are no errors in spelling, grammar, and punctuation. Information is clear and concise on each slide. Visually appealing/engaging.
3: There are some errors in spelling, grammar, and punctuation. Too much information on two or more slides. Significant visual appeal.
2: There are many errors in spelling, grammar, and punctuation. Too much information was contained on many slides. Minimal effort made to make slides appealing, or too much going on.
1: There are many errors in spelling, grammar, and punctuation. The slides were difficult to read and too much information had been copied onto them. No visual appeal.

Comprehension (4M)
4: Extensive knowledge of the topic. Members showed complete understanding of the assignment. Accurately answered all questions posed.
3: Most members showed a good understanding of the topic. All members could answer most of the audience's questions.
2: Few members showed good understanding of some parts of the topic. Only some members accurately answered questions.
1: Presenters did not understand the topic. Most questions were answered by only one member, or most of the information was incorrect.

Presentation Skills (4M)
4: Regular/constant eye contact; the audience was engaged and presenters held the audience's attention. Appropriate speaking volume and body language.
3: Most members spoke to most of the audience; steady eye contact. The audience was engaged by the presentation. Most presenters spoke at a suitable volume. Some fidgeting by member(s).
2: Members focused on only part of the audience. Sporadic eye contact by more than one presenter. The audience was distracted. Speakers could be heard by only half of the audience. Body language was distracting.
1: Minimal eye contact by more than one member, focusing on a small part of the audience. The audience was not engaged. Most presenters spoke too quickly or quietly, making it difficult to understand. Inappropriate/disinterested body language.

Content (4M)
4: The presentation was a concise summary of the topic with all questions answered. Comprehensive and complete coverage of information.
3: The presentation was a good summary of the topic. Most valuable information covered; little irrelevant information.
2: The presentation was informative, but several elements went unanswered. Much of the information was irrelevant; coverage of only some major points.
1: The presentation was a brief look at the topic, but many questions were left unanswered. Most of the information was irrelevant and significant points were left out.

Preparedness/Participation/Group Dynamics (4M)
4: All presenters knew the information, participated equally, and helped each other as needed. Extremely prepared and rehearsed.
3: Slight domination by one presenter. Members helped each other. Very well prepared.
2: Significant controlling by some members, with one member contributing only minimally. Primarily prepared, but with some dependence on just reading off slides.
1: Unbalanced presentation or tension resulting from over-helping. Multiple group members not participating. Evident lack of preparation/rehearsal. Dependence on slides.

PRESENTATION INDIVIDUAL MARKS

Each member is assessed out of 20 marks (5%) across five criteria: Visual Appeal (4M), Comprehension (4M), Presentation Skills (4M), Content (4M), and Preparedness/Participation/Group Dynamics (4M).

1. NAME: NURUL SHAZALINA ZAINUDIN
   ID NUMBER: P131367

2. NAME: LAW WU JIA
   ID NUMBER: P131100

3. NAME: LAW JUN YI
   ID NUMBER: P131047

4. NAME: CARMEN CHAI WAN YIN
   ID NUMBER: P131045

5. NAME: KUA TIONG YEE
   ID NUMBER: P131374

6. NAME: ASHRAF AZUAS
   ID NUMBER: P131279

STUDENT BIOGRAPHIES
Hi, I am Ashraf bin Azuas, a Senior Aeronautical Engineer (BEM GE148636A). I have worked in aircraft MRO and engineering design companies for almost 13 years since graduating from USM with a Bachelor of Engineering (Aerospace) in 2010. I have specialized engineering skills and varied experience in operations management. I completed my Master of Engineering in Manufacturing Systems at UKM in 2021 to understand the production of aerospace parts more deeply and to leverage my skills and knowledge. I am now pursuing my MBA at UKM to learn about organizational management and to further my knowledge of business. With this knowledge, I will be ready to contribute the best of myself to my current or future company and to my own businesses.
Hi, my name is Nurul Shazalina Zainudin. I hold a PhD in parasitology from the UKM Faculty of Medicine, as well as an MSc in molecular medicine and a BSc in biotechnology, both from USM. As a trained auditor, I currently hold the position of Quality Manager at a pharmaceutical company, where I ensure adherence to ISO 9001:2015 standards. I am also a Certified Halal Executive. My background includes working as a Scientific Manager in the medical device sector. Outside of the corporate world, I have successfully started and run six early education branches.


Nice to meet you. I am Law Wu Jia, a senior pharmacist working in Hospital Kuala Lumpur, and I graduated with a Bachelor of Pharmacy (Hons). I have nine years of experience working in different general hospitals across different states. Throughout the years I have managed different portfolios, covering not just daily pharmacist work but also mentoring, leading a team, and dealing with drug applications and indents. As my seniority has grown, I have started to take an interest in management. I reached a point where I craved more growth and wished to explore more possibilities in my career pathway. I believe growth starts with the courage to change, and change starts with the willingness to learn. Through the pursuit of an MBA, I hope to grow as a person and to inspire those whom I work with or lead in the future.
Greetings to everyone. I am Carmen Chai
Wan Yin. I hold a Bachelor’s (Hons) degree
in Social Science Psychology. Following
graduation, I embarked on a journey as a
therapist for children with Autism, honing
my clinical psychology skills for one and a
half years. Subsequently, I transitioned my
career path to become a Project
Development Executive. Within this role, I
acquired invaluable insights into the end-to-
end process of executing sales projects –
from conceptualization and planning to
impactful launch and successful sales
attainment. Notably, my current employment within a prominent US-listed company has given me significant exposure to fundraising projects. Driven by a passion for continuous growth, I am resolute in expanding my expertise in this domain, and this fuels my decision to pursue an MBA.


Hello, I’m Kua Tiong Yee, a dedicated
student of business administration. I hold a
Bachelor of Arts in Business Administration
from the Coventry University Dual Award
Program, a Bachelor of Business
Administration (RBU) from Tunku Abdul
Rahman University College, and a Diploma
in Business Studies (Business
Administration). My education has
equipped me with strong interpersonal
skills, effective communication abilities,
and a goal-oriented mindset. I have gained
valuable working experience as a Sales
Executive at NTN Bearing Malaysia Sdn
Bhd. In this role, I successfully managed
various projects across diverse industries,
specializing in OEM sales, after-market
sales, and project sales. I’m excited to
further contribute my skills and knowledge
to the business world.
Hi, I am Law Jun Yi, and I hold a Bachelor's Degree in Management (Major in Finance and Minor in Psychology) from Universiti Sains Malaysia (USM). I spent the first 2.5 years of my career in the banking sector before switching to the HR profession, specializing in Learning and Development. Currently working as the Senior Executive of Group Learning and Development at Kossan, I act as the team leader for two executives, overseeing the management and execution of training programs. As an HRDCorp-certified Train-the-Trainer, I also serve as a corporate trainer, designing and conducting training programs such as Practical Excel, 5S and lean management, Basic Glove Manufacturing Process, and a Train-the-Trainer crash course. After working in two different professional fields, I would like to further expand my exposure to the business world by taking the MBA course.

ACKNOWLEDGEMENT

This collaborative effort is the result of the team members' excellent contributions of ideas. We sincerely appreciate all of the team members' help and understanding in getting this work done, as well as the support from their families and friends.

We would also like to express our gratitude to the previous lecturers who inspired us and shared their insights and experiences, which sparked the development of the ideas in this group project.

TABLE OF CONTENTS

1.0 INTRODUCTION
    1.1 History of Facebook owned by META
    1.2 Workplace dilemma in Meta Facebook
    1.3 Purpose of report
2.0 CAUSES AND IMPACTS
    2.1 Causes
        2.1.1 Obsession of the organization with growth and profit over safety
        2.1.2 Unethical handling of users' data privacy
        2.1.3 Frequent data leaks spark controversy and disappointment
        2.1.4 AI algorithm amplification can go wrong
        2.1.5 Poor content moderation system remains a problem
    2.2 Impacts
        2.2.1 Cybercrime
        2.2.2 Antitrust concerns
        2.2.3 Political advertising and manipulation
        2.2.4 Negative mental health impact
        2.2.5 Spread of misinformation
        2.2.6 Threats to human safety leading to death
3.0 INNOVATIVE SOLUTION
4.0 CONCLUSION

1.0 INTRODUCTION

Few organizations have had as much impact and influence in the constantly changing world of social media and technology as Facebook. Initially created as a platform to connect people around the world, it swiftly developed into a digital powerhouse that not only revolutionized communication but also shaped the interactions of more than two billion people across the globe (Srinivasan, 2019). Facebook has unquestionably left a lasting mark on contemporary society, from its modest origins in a college dorm to its current position as a worldwide giant. However, such power also brings a variety of difficulties, some of which have created serious moral conundrums and brought the firm into the public eye for all the wrong reasons.

1.1 History of Facebook owned by META

Facebook is an online social media and networking service owned by Meta Platforms Inc., an American multinational technology conglomerate based in Menlo Park, California. Facebook grew out of the Facemash website, which Mark Zuckerberg first introduced in 2003 while attending Harvard University (Cruz & Dias, 2020). Zuckerberg created Facemash for fun, with the objective of comparing two pictures taken from the Houses' photo directories to see who was the "hottest". Upon launch, the website became a sensation, attracting 450 visitors and 22,000 photo views in the first four hours. Owing to concerns about network traffic and violations of copyright, security, and privacy, the website was shut down immediately (Cruz & Dias, 2020).

Inspired by a Crimson editorial on his Facemash idea, Zuckerberg and Eduardo Saverin invested $1,000 each to create "thefacebook.com" on 4 February 2004 (Brugger, 2015). Membership was initially limited to Harvard students, with at least half of the undergraduates registering. With this strong support, the team, consisting of Zuckerberg and Saverin along with new members Dustin Moskovitz, Andrew McCollum, and Chris Hughes, grew the website (Brugger, 2015). In March 2004, Facebook expanded to other universities, including Columbia, Yale, Stanford, all the Ivy League colleges, Boston University, NYU, and MIT, and eventually to most universities in the US and Canada (Schneider, 2004). With help from investors such as Peter Thiel, Accel Partners, and Jim Breyer, "thefacebook" dropped its "the" and became facebook.com, as it is known today, in May 2005, becoming accessible to all (Cassidy, 2006).
In an ambiguous gesture, Facebook changed its name to Meta in October 2021, heralding the dawn of a new era of social interaction made possible by metaverse technology, which is predicted to become the future focal point for online social interactions (Kraus et al., 2022).

1.2 Workplace dilemma in Meta Facebook

A situation in which a decision-maker must choose between two or more options of roughly equal moral importance or ethical value is known as a dilemma (Kvalnes, 2019). Michael Rion stated that "ethical dilemmas often arise as the unintended consequences of well-intentioned actions, not from unethical motives" (Anupama & Kumari, 2014). Facebook's power is a case in point: it has created serious moral dilemmas and brought the firm into the public eye for all the wrong reasons.

Testifying before a Senate subcommittee in 2021, the Facebook whistleblower Frances Haugen shared that the company's algorithms are dangerous. Algorithms have received more attention in recent years as they influence more and more areas of public life. Algorithms are abstract procedures, implemented in software programs and used on the Internet, that convert input into output by means of preset computing processes (throughput). Many of these programs were created to manage the enormous amount of data and information available online (input). They filter and prioritize data, choose information, and organize it (throughput). The output can then be used for rankings, recommendations, price setting, or text generation, among other purposes (Saurwein & Spencer-Smith, 2021). It is challenging for users, such as professional communicators, and end users, such as citizens of a community, to understand how algorithms work and what data influences their judgments, because algorithms are hidden behind a great deal of business secrecy (Zuboff, 2015). Because they are run by huge firms and platform owners who rely on them for their financial success, the algorithms also change regularly and frequently without warning. According to O'Neil (2017), the platform owners' economic interests are reinforced by the rapid scaling of material and relationships and by "hidden" goals built into the algorithms that shape user behavior.
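To make the input-throughput-output framing above concrete, the following Python sketch shows a deliberately simplified selection pipeline that filters, prioritizes, and organizes posts into a ranked output. The class names, fields, and scores are illustrative assumptions made for this report, not Facebook's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    predicted_engagement: float  # hypothetical model output in [0, 1]
    flagged_as_spam: bool

def select_and_rank(posts: list[Post], limit: int = 10) -> list[Post]:
    """Toy input -> throughput -> output pipeline: filter, prioritize, organize."""
    # Throughput step 1: filter out content the platform does not want to show.
    candidates = [p for p in posts if not p.flagged_as_spam]
    # Throughput step 2: prioritize by a relevance proxy (here, predicted engagement).
    ranked = sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)
    # Output: an organized, truncated feed ready for rankings or recommendations.
    return ranked[:limit]

if __name__ == "__main__":
    feed = [
        Post(1, "Family photo", 0.42, False),
        Post(2, "Buy now!!!", 0.90, True),
        Post(3, "Outrage headline", 0.88, False),
    ]
    for post in select_and_rank(feed):
        print(post.post_id, post.text)
```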

Facebook began to modify its algorithms so that only the most relevant posts were shown in a user's feed as the social network (and the Internet overall) became larger and there was simply too much information for any one user to absorb. Since then, Facebook's algorithms have placed a high value on sorting and presentation (Silverman & Huang, 2017). With a focus on posts about family and friends rather than commercial relationships, the algorithms adjust the order and presentation of posts so that users only see what Facebook believes to be most relevant to them. Since 2018 (Mosseri, 2018), this focus has grown even more intense, which can create challenges for organizations while also heightening public perception of Facebook's significance. As a result, businesses must produce even more genuine posts and active involvement; they must, in other words, be "friends" with their clients. The change simply means that the algorithms favor quality material based on genuine connection and meaningful contact; it does not mean that businesses are no longer promoted at all in users' news feeds (Rowland, 2010).

One of the major problems with these algorithms is that there is no single Facebook team in control of the entire content-ranking mechanism. Depending on the goals of their team, engineers create and incorporate their own machine-learning models into the mix. For instance, teams tasked with deleting or demoting offensive content, referred to as integrity teams, only train models for identifying the particular forms of offensive content they are responsible for. Facebook made this choice as part of its early "move fast and break things" principles. It created an internal tool called FBLearner Flow that enabled engineers without prior machine-learning experience to build whatever models they required. According to one data point, more than 25% of Facebook's engineering team was already using it in 2016 (Hao, 2021).
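The following hypothetical Python sketch illustrates the organizational point: when each team registers its own scoring model independently, the final ranking score is simply the sum of whatever happens to be registered, and no single owner controls the overall behaviour. The registry, team names, and signals are invented for illustration and are not FBLearner Flow's real interface.

```python
from typing import Callable, Dict

# Each team registers its own scoring model; no single owner sees the whole mix.
SCORERS: Dict[str, Callable[[dict], float]] = {}

def register_scorer(team: str):
    def decorator(fn: Callable[[dict], float]):
        SCORERS[team] = fn
        return fn
    return decorator

@register_scorer("ads")
def ad_relevance(post: dict) -> float:
    return post.get("ad_click_prob", 0.0)

@register_scorer("engagement")
def engagement(post: dict) -> float:
    return post.get("predicted_reactions", 0.0)

@register_scorer("integrity")
def integrity_penalty(post: dict) -> float:
    # An integrity team only trains models for the harms it is tasked with.
    return -5.0 if post.get("looks_like_hate_speech") else 0.0

def final_score(post: dict) -> float:
    # The production score is the sum of whatever models happen to be registered.
    return sum(scorer(post) for scorer in SCORERS.values())
```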

In an interview, Frances Haugen said it had been discovered that people were more likely to feel anger than other emotions when exposed to content selected by the Facebook algorithm that populates a user's News Feed. Haugen claims that Facebook has been aware of this through its own research, but also understands that if the algorithm were changed to make users feel safer, users would spend less time on the site, which would lead to fewer clicks and, ultimately, less revenue (Khanna, 2021).

1.3 Purpose of report

This report's main goal is to investigate comprehensively the potential hazards created by the algorithms used by the Meta (previously Facebook) platform. It examines the underlying factors that fuel the risks connected to these algorithms, revealing their complex workings and the effects they have on users and society at large. By looking into the potential drawbacks of algorithmic decisions, the study highlights how urgent it is to address these problems. The report also examines innovative solutions that have been developed as safeguards against the dangers of algorithmic decision-making at Meta.

2.0 CAUSES AND IMPACTS

There has been a workplace dilemma for Meta Facebook's employees, who have grown discontented with the unethical workplace practices in the organization concerning data privacy and the fact that profit is prioritized over safety. These unethical practices were uncovered with the Cambridge Analytica (CA) scandal, the largest known data leak in Facebook's history (Confessore, 2018). Later, many employees, including the ex-Facebook employee Frances Haugen, blew the whistle on the company's unethical workplace practices. The world learned that Facebook had become increasingly leaky and that many of its employees had become discontented with the situation (Mac & Kang, 2021).

2.1 Causes

2.1.1 Obsession of the organization with growth and profit over safety

The organization's obsession with growth and profit over safety has made not only its employees but also the public question its credibility. A controversial internal Facebook memo titled "The Ugly" featured the Facebook Vice President Andrew Bosworth's statement on the aggressiveness of connecting people. In the memo, Bosworth expressed his belief in growing and connecting people at all costs, even if someone dies by suicide or a terrorist attack is coordinated on the platform. The memo showed an executive pushing employees to embrace Meta Facebook's growth strategy without considering the consequences, and the dark side of this approach makes the company's ethics questionable (Friedersdorf, 2018).

2.1.2 Unethical handling of user’ s data privacy

Unethical tactics are inevitable in the organization and the employees can only choose
to either take it or leave it, which sparked discontentment. This is due to the way Meta
Facebook made its revenue via advertisement which target its users. While the Facebook
Chief Executive Officer (CEO), Mark Zuckerberg have consistently make statement to
reassure the safety of its platform, leaked company documents showed that Mark Zuckerberg
along with his board and management team leveraged the user data to the companies it
partnered while cutting the access of its data to its potential competitors (Solon & Farivar,
2019).

2.1.3 Frequent data leaks spark controversy and disappointment

Data leakage has been a continuous issue over the years, and it sparked controversy over platform safety in the 2018 CA scandal. The CA ex-employee Christopher Wylie blew the whistle on CA's improper acquisition of private Facebook data, which was used to build and sell psychological profiles to political campaigns without users' consent. It was found that employees had tried to alert Facebook as early as September 2015, and that Facebook had asked CA to delete the data but failed to follow up. The scandal led disappointed Facebook engineers to quit or request transfers to other divisions, such as WhatsApp or Instagram, over ethical concerns (Bhardwaj, 2018). Later, in 2021, Facebook experienced another data leak, in which the personal data of 533 million Facebook users was published on a low-level hacking forum and could be viewed for free. The published personal data included phone numbers, Facebook IDs, locations, birth dates, and email addresses. The leak occurred because Facebook was slow to manage a vulnerability in its platform that allowed data to be scraped (Holmes, 2021).

2.1.4 AI algorithm amplification can go wrong

The AI algorithms on the Meta Facebook platform are based on machine learning and serve two functions. The first function implements targeted advertising and ranks content by capturing data on a user's interaction history on Facebook, which causes the content a user favours to appear more frequently in the news feed. The second function detects bad content, such as nudity and spam, and pushes that content down the news feed. Although this might seem sound, no single team is in charge of the content-ranking algorithm: different engineering teams develop and add their own mixture of machine-learning models to the algorithms based on their teams' objectives. This is part of Facebook's "move fast, break things" culture, but it has grown extremely complex. As a consequence, no one can keep track of and control the type of content that is served to users' news feeds, which leads to integrity and security issues in the algorithms (Hao, 2021). The AI algorithms can end up amplifying negative content, such as misinformation, hate speech, extreme viewpoints, polarizing content, and even conspiracy theories.
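A toy calculation, with made-up items and weights, shows how this amplification can happen: if the engagement signal is weighted more heavily than the demotion signal, outrage-driven content can still outrank benign posts even after being partially demoted.

```python
# Hypothetical feed items: (name, predicted_engagement, integrity_demotion).
# Engagement boosts content up, detected "bad" signals push it down, but the two
# signals are tuned by different teams, so amplification can still win out.
ITEMS = [
    ("vacation photo from a friend", 0.30, 0.0),
    ("angry political rumour", 0.95, 0.4),   # high outrage engagement, mild demotion
    ("obvious spam link", 0.80, 1.0),        # caught outright by the spam classifier
]

ENGAGEMENT_WEIGHT = 2.0   # illustrative tuning choices, not Meta's real values
DEMOTION_WEIGHT = 1.0

def feed_order(items):
    scored = [(e * ENGAGEMENT_WEIGHT - d * DEMOTION_WEIGHT, name)
              for name, e, d in items]
    return [name for _, name in sorted(scored, reverse=True)]

print(feed_order(ITEMS))
# The rumour still outranks the friend's photo unless the demotion weight is raised.
```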

2.1.5 Poor content moderation system remains a problem

A poor and insufficient content moderation system, coupled with the risk of negative content amplification by the AI algorithms, raises employees' concerns about the dark side of the platform. Although content moderation can be carried out by automation or by employees, there is no limit to the amount of content that can be posted on the platform, and the system partly relies on reporting by users, which is inconsistent. It is also difficult to protect the mental health of employees who are constantly exposed to harmful content, given the limitations of automation in linguistic and cultural competence (Stackpole, 2022). This became a problem for over 180 sacked Sama employees, subcontracted content moderators who are seeking compensation for mental health damage and "unlawful" dismissal (Valmary, 2023).

2.2 Impacts

2.2.1 Cybercrime

A platform with frequent data leaks leaves its users prone to cybercrime such as phishing, scams, and hacking. A small amount of private information is usually enough for bad actors to hack or impersonate people, or to commit fraud (Holmes, 2021). Earlier this year, Meta Facebook warned users after finding malicious software that claimed to offer ChatGPT-based tools to Facebook users. Malware operators tried to trick users into clicking malicious links or installing software, ultimately giving attackers a chance to break into their devices (Pedler, 2023).

2.2.2 Antitrust concerns

Meta Facebook's obsession with profit has resulted in unethical tactics to crush competitors through an illegal buy-or-bury scheme. In an effort to remain the monopolist in its field, Facebook used its open platform as bait for third-party software developers, then buried successful apps that looked like potential competitors. This is not only against antitrust law; as a monopolist, the company also suppresses the competition needed for better innovation and limits users' choice (Federal Trade Commission, 2021).

2.2.3 Political advertising and manipulation

Because Meta Facebook has the ability to remove certain content, and there is a lack of standards for fair censorship, its decisions can lead to political manipulation. One instance is the selective censorship of posts by anti-government groups in Vietnam as an election approached. Although the CEO of the company, Mark Zuckerberg, champions freedom of speech, his actions in Vietnam contradict what he claims to stand for. During Vietnam's party congress, Zuckerberg chose to comply with the ruling Communist Party's demands to censor anti-government posts in order to avoid being knocked offline and losing roughly USD 1 billion in annual revenue. This not only contradicts the company's stated commitment to freedom of speech; it also allows the ruling party nearly total control of the social media platform (Dwoskin et al., 2021).

2.2.4 Negative mental health impact

A leak suggests that Meta Facebook has been aware of Instagram's harmful effect on the mental health of teenage girls, through the creation of negative body image issues, but kept this knowledge to itself for two years. According to the leaked internal research, 32% of teenage girl users said that when they felt bad about their bodies, Instagram made them feel worse. This contributes to rising anxiety and depression, and can ultimately result in suicidal thoughts. A further finding showed that 13% of British users and 6% of American users with suicidal thoughts traced them back to the app, and another figure cited that over 40% of Instagram users who reported feeling "unattractive" said the feeling began on the app (Gayle, 2021). Because the Facebook AI algorithm amplifies the content a user engages with most frequently, users with existing negative views and low self-esteem are fed ever more extreme content based on their engagement with the platform. This ultimately results in the deterioration of some users' mental health (Hao, 2021).

2.2.5 Spread of misinformation

A great deal of misinformation spreads because Meta Facebook's algorithm and its Groups feature have been found to help extremists and activists, such as QAnon conspiracy theorists and anti-vaccination activists, gain audiences and support. The QAnon conspiracy started as an unfounded theory that President Trump was fighting Satan-worshipping paedophiles; Trump played along with the misinformation and came to be viewed as a hero among believers (Sen & Zadrozny, 2020). In addition, COVID-19 deniers and anti-vaccination activists spread fake news claiming that COVID-19 was a scam or a hoax created by governments, which caused difficulties in curbing the pandemic (Gallagher, 2021).

2.2.6 Threats to human safety leading to death

The security issues at Meta Facebook enable the spread of hatred towards individuals or groups, which poses a threat to human safety. In some cases, AI amplification of such posts, coupled with Facebook's failure to remove them, has contributed to the deaths of ethnic minorities in wartime. In one lawsuit, a plaintiff named Abrham Meareg, himself a member of an ethnic minority, alleged that Facebook refused to remove hateful posts about his father, a respected chemistry professor at Bahir Dar University in the Amhara region of Ethiopia, despite his reporting those posts many times. His father was subsequently followed and shot to death (Perrigo, 2022).

3.0 INNOVATIVE SOLUTION

To address the dilemma faced by Meta Facebook's employees, who struggle with dissatisfaction over the organization's unethical practices of prioritizing profits over data privacy and safety, Meta Facebook should introduce an effective whistle-blowing system. Such a system would offer a secure channel for employees and stakeholders to report any instances of unethical behaviour or violations of company policies (Transparency International, 2022). By implementing this whistle-blowing system, Meta Facebook can encourage transparency and ensure the protection and anonymity of those who blow the whistle.

To materialize this solution, Meta Facebook can develop comprehensive whistle-blower software that unifies various applications and workflows into a single platform. This platform would cater to compliance, legal, and human resources professionals (TrustRadius, 2023). The software should offer user-friendly functionality, enabling easy access for different groups of users while encompassing all necessary standard policies and procedures (TrustRadius, 2023).

A paramount consideration in this endeavour is safeguarding data privacy within the


whistle-blowing system. End-to-end encryption (E2EE) should be incorporated into the very
fabric of the whistle-blowing software, assuring an elevated standard of data privacy across
the communication platform (WhistleBlower Software, n.d.). By embracing these measures,
Meta Facebook can effectively address the ethical dilemmas faced by its employees and
cultivate an environment that prioritizes integrity, transparency, and protection.
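As an illustration of how E2EE could be wired into such a reporting channel, the sketch below uses the open-source PyNaCl library's SealedBox construction, so that a report is encrypted on the reporter's device and only the compliance team's private key can decrypt it. The key handling and message shown are minimal assumptions for demonstration, not any vendor's actual design.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Generated once by the compliance team; the private key never leaves their device.
compliance_key = PrivateKey.generate()
public_key = compliance_key.public_key  # distributed with the reporting app

# On the reporter's device: encrypt before anything is sent to the server.
report = "Suspected misuse of user data in project X".encode("utf-8")
ciphertext = SealedBox(public_key).encrypt(report)

# The server stores only ciphertext; it cannot read the report, which also
# helps preserve the reporter's anonymity.
plaintext = SealedBox(compliance_key).decrypt(ciphertext)
assert plaintext.decode("utf-8").startswith("Suspected")
```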

Next, to address the dilemma, Meta Facebook has proactively implemented Privacy-Enhancing Technologies (PETs) to fortify its data protection measures against potential data breaches across all of its platforms, including Facebook, Instagram, and WhatsApp. By employing PETs, Meta strategically reduces the extent of data processing, effectively safeguarding personal information from vulnerabilities. These advanced PETs serve a dual purpose: bolstering data security and optimizing digital advertising for marketers seeking efficient solutions (Facebook, 2021).

Meta Facebook is well known as a social connectivity hub, and social advertising accounts for a substantial portion of Meta's revenue. The company operates an expansive database encompassing user profiles, personal details, IDs, and passwords. It is therefore Meta's responsibility to prevent potential data breaches.

In social advertising, Meta Facebook's implementation of PETs facilitates private


cross-platform operations, seamless customer matching, and precise measurement between
advertisers and publishers (Facebook, 2021). The application of PETs by Meta provides a
spectrum of solutions contributing to ad personalization and meticulous measurement
strategies.

The primary method employed in Privacy Enhancing Technologies (PETs) is Secure


Multi-Party Computation (MPC). Through this approach, it becomes possible for multiple
entities to collaborate while simultaneously restricting their access to sensitive data or
information (Deloitte, n.d.). Implementing this technique establishes an extensive shield,
effectively preventing unauthorized data access and safeguarding customer information from
breaches. Comprehensive end-to-end encryption is maintained throughout data transmission,
storage, and usage, ensuring neither party gains visibility into the other's data (Deloitte, n.d.).
This dual function also serves as a firewall against potential hackers. Furthermore, MPC
proves invaluable in protecting privacy during the reporting calculations, such as sharing the
outcomes of an advertising campaign involving multiple stakeholders (Deloitte, n.d.).
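The idea behind MPC can be illustrated with a toy additive secret-sharing scheme in Python: an advertiser and a publisher each split a private count into random shares, the shares are combined, and only the aggregate is ever revealed. Real MPC protocols are far more involved, and the figures below are invented.

```python
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value: int, n_parties: int = 3):
    """Split a secret into additive shares; fewer than all shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Advertiser and publisher each share their private conversion counts.
advertiser_shares = share(1200)
publisher_shares = share(345)

# Each compute party adds only the shares it holds, seeing neither raw input.
summed_shares = [(a + b) % PRIME for a, b in zip(advertiser_shares, publisher_shares)]

# Only the final aggregate of the campaign is revealed to the stakeholders.
assert reconstruct(summed_shares) == 1200 + 345
```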

The subsequent approach utilized within Privacy-Enhancing Technologies (PETs) is on-device learning. This methodology entails training algorithms on the device itself to deliver pertinent advertisements to individuals (Deloitte, n.d.). Machine-learning principles underpin this system. By leveraging this technology, algorithms can record people's preferences, internalize patterns, and consequently tailor content presentation to those preferences across applications and websites (Deloitte, n.d.).
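One common way to realize on-device learning is a federated-style setup, sketched below under the assumption of a deliberately trivial preference model: each device updates its own weights from local interactions, and only the weights (never the raw interaction history) leave the device to be averaged into a shared model.

```python
# Toy on-device learning: each device fits a tiny preference model locally and
# shares only model weights, never the raw interaction history.
def local_update(weights, interactions, lr=0.1):
    """One gradient-style step toward the categories this user engaged with."""
    for category, clicked in interactions:            # raw data stays on the device
        error = clicked - weights.get(category, 0.0)
        weights[category] = weights.get(category, 0.0) + lr * error
    return weights

def federated_average(device_weights):
    """The server only ever sees per-device weights, averaged into a shared model."""
    categories = {c for w in device_weights for c in w}
    return {c: sum(w.get(c, 0.0) for w in device_weights) / len(device_weights)
            for c in categories}

device_a = local_update({}, [("sports", 1), ("cooking", 0)])
device_b = local_update({}, [("sports", 1), ("travel", 1)])
shared_model = federated_average([device_a, device_b])
print(shared_model)
```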

On a different note, applying AI content filtering is also necessary. This AI-driven tool can discern objectionable and inappropriate content, encompassing elements such as violence, nudity, and hate speech (Arnold, 2022). Its proactive approach ensures the elimination of such material, presenting only content that aligns with the preferences of the intended audience. The most effective strategy involves a harmonious blend of AI and human content moderation, with human moderators meticulously reviewing content flagged by the AI system (Olga, 2023). AI content moderation serves as the initial layer of filtering, complemented by a secondary layer of human content moderation. This combination of AI and human moderation finds widespread application, notably within platforms like YouTube (Olga, 2023). Through these measures, AI operates under diligent supervision, upholding ethical standards.
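The layered arrangement described above can be sketched as a simple triage routine: an AI classifier scores each post, clearly harmful items are blocked, borderline items are routed to a human review queue, and the rest are published. The classifier stub and thresholds below are placeholders, not a real moderation model.

```python
from dataclasses import dataclass, field
from typing import List

BLOCK_THRESHOLD = 0.95   # illustrative thresholds; in practice set by policy teams
REVIEW_THRESHOLD = 0.60

@dataclass
class ModerationQueues:
    blocked: List[str] = field(default_factory=list)
    human_review: List[str] = field(default_factory=list)
    published: List[str] = field(default_factory=list)

def classify(text: str) -> float:
    """Stand-in for an ML classifier returning a probability that content is harmful."""
    return 0.99 if "hate" in text.lower() else 0.1

def moderate(posts: List[str]) -> ModerationQueues:
    queues = ModerationQueues()
    for post in posts:
        score = classify(post)                 # first layer: AI filtering
        if score >= BLOCK_THRESHOLD:
            queues.blocked.append(post)
        elif score >= REVIEW_THRESHOLD:
            queues.human_review.append(post)   # second layer: human moderators decide
        else:
            queues.published.append(post)
    return queues
```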

To address the concern of data leakage and bolster user data privacy, Meta Facebook should strengthen its security measures by implementing enhanced Two-Factor Authentication (2FA). Two-Factor Authentication, also referred to as two-step verification, is a security protocol in which users must provide two different authentication factors, such as a secure password and biometric information, to verify their identity (Rosencrance & Loshin, n.d.). This approach serves as a formidable defence: even if a password is compromised, users can still safeguard their personal information, as unauthorized parties cannot meet the dual-factor requirements for account access (Rosencrance & Loshin, n.d.).

Introducing Two-Factor Authentication serves the dual purpose of strengthening user data security and safeguarding the resources users can access. Adding an extra protection layer raises the difficulty for potential attackers attempting to breach online accounts. The authentication factors fall into several categories (Rosencrance & Loshin, n.d.):
• Knowledge (password or PIN)
• Possession (ID card, mobile device, or smartphone)
• Biometrics (facial or fingerprint recognition)
• Context (location and time)

Breaching the authentication requires an attacker to defeat more than one of these factors at once. Notably, suspicious activity, such as an attempt to log in from an unfamiliar location or at an unusual time, can trigger GPS data verification and send an alert to the account owner, further strengthening security (Rosencrance & Loshin, n.d.).

Consequently, a would-be attacker aiming to bypass Two-Factor Authentication must be


able to navigate the multifaceted requirements outlined above. This layered approach
substantially enhances the protection of user data, fostering a safer digital environment for
Meta's users.
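As a small illustration of the "knowledge plus possession" combination, the sketch below uses the open-source pyotp library to verify a time-based one-time password as the second factor. The enrolment and login flow is a simplified assumption for this report, not Meta's actual implementation.

```python
# pip install pyotp
import pyotp

# Enrolment: the service stores this secret and shows it to the user as a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok: bool, submitted_code: str) -> bool:
    """Both factors must pass: something you know plus something you have."""
    return password_ok and totp.verify(submitted_code)

# The authenticator app on the user's phone generates the same 6-digit code.
current_code = totp.now()
assert login(True, current_code) is True
assert login(True, "000000") is False   # a stolen password alone is not enough
```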

As a further innovative solution, it is recommended that Meta Facebook focus on managing its use of Artificial Intelligence (AI) in its social media applications and practice AI ethics. AI ethics is a system of moral principles and techniques that guides the ethical development, deployment, and use of AI technologies. Simply implementing AI is insufficient; it is crucial that the use of AI aligns with ethical principles.

Lawton and Wigmore (2023) suggest applying AI ethics in exactly this way. AI has become essential to most products and services, particularly social media platforms, and organizations are starting to develop their own AI codes of ethics.

The objective of an AI code of ethics is to give users guidance when faced with an
ethical dilemma in relation to the utilization of AI. The advancement of AI has brought about
significant ethical concerns that require a thoughtful and principled method for how it is
created and put into use. AI ethics encompasses a system of moral values and guidelines that
guide the responsible utilization of AI technology. While AI offers unparalleled opportunities,
its potential pitfalls and unintended consequences underscore the importance of adhering to
AI ethics. (Lawton & Wigmore, 2023)

The benefits of the AI code of ethics include aligning with customer-centric and
socially conscious trends, enhancing brand perception, and fostering positive societal impact.
Furthermore, it also creates a cohesive workplace environment by reassuring employees about
their company's values. (Lawton & Wigmore, 2023)

Moreover, an AI code of ethics can provide the motivation that encourages appropriate behaviour. For example, Sudhir Jha, senior vice president at Mastercard, offered a few tenets to guide the development of his company's AI code of ethics: an ethical AI system must encompass inclusivity, transparency, positive intent, and responsible data usage (Lawton & Wigmore, 2023).

The adoption of a robust AI code of ethics presents Meta Facebook with both a moral
imperative and a strategic opportunity. By prioritizing inclusivity, transparency, and
responsible data usage, Meta can gain user trust and align with customer-centric trends. Such
ethical considerations will guide decision-making, preventing misuse and ensuring AI
systems that adhere to human values. This commitment positions Meta at the forefront of
responsible AI innovation.

The next innovative solution is for Meta Facebook to engage an external and independent regulator to audit its systems. Saurwein and Spencer-Smith (2021) suggest the implementation of external and independent auditing for the purpose of inspecting and auditing content moderation systems. It is recommended that an independent regulator be empowered and resourced to enforce platforms' due diligence and transparency obligations. However, the effectiveness of this governance framework is conditional upon the level of transparency between the platform and the independent regulator. The independent regulator should have the power to demand any type of granular evidence necessary to fulfil its supervisory tasks, and to impose fines or other corrective measures if information is not provided in a timely manner.

Moreover, Saurwein and Spencer-Smith (2021) also highlight that such regulation should mandate that large online platforms enhance the transparency of their algorithmic systems, including but not limited to providing opportunities to opt out of profiling and personalization, protecting services from manipulation, and carrying out risk assessments to avoid the spread of illegal content, restrictions of fundamental rights, and manipulation.

An example can be seen in the recent introduction of the Digital Services Act by the European Commission. Though in its infancy, this initiative signifies a pivot towards tighter oversight of internet platforms and their algorithm-driven functions. The proposal advocates external auditing and EU-level technical assistance to inspect content moderation, recommender algorithms, and online advertising, with the aim of establishing accountability and ensuring responsible digital conduct (Saurwein & Spencer-Smith, 2021).

With compulsory audit and inspection powers, a regulator is empowered to rectify the
information asymmetry that currently defines the public’s relationship with large technology
companies. These powers are essential for effective oversight and compliance: a regulator
would struggle to achieve its statutory goals without these powers in place.

The last innovative solution for Meta Facebook is to create strong foundations of enterprise risk management aligned with leading practices and regulatory requirements. In dealing with the ethical landscape of algorithmic systems, a well-rounded approach is imperative, encompassing strategy, design, deployment, and monitoring. These are the key steps for the responsible management of algorithmic risks (Managing Algorithmic Risks, n.d.).

Strategy and governance: create an algorithmic risk management strategy and


governance structure to handle technical and societal risks. This ought to include principles,
policies, and standards; roles and responsibilities; control processes and procedures; and
appropriate personnel recruitment and training. Providing transparency and establishing
procedures to manage inquiries can also aid organizations in the responsible use of
algorithms. (Managing Algorithmic Risks, n.d.)

Design, development, deployment, and use: create procedures and strategies that align with the governance framework to handle the entire algorithm life cycle, spanning from data selection to algorithm design, to integration, to actual live use in production (Managing Algorithmic Risks, n.d.).

Monitoring and testing: Create procedures to evaluate and supervise algorithmic data
inputs, operations, and outcomes, making use of cutting-edge tools as they become accessible.
Seek objective reviews of algorithms by internal and external parties. (Managing Algorithmic
Risks, n.d.)
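A monitoring procedure of this kind can be as simple as tracking how often the algorithm demotes content and alerting reviewers when that rate drifts away from an audited baseline, as in the illustrative sketch below (the baseline figure and tolerance are invented).

```python
# Toy monitoring check: compare today's share of demoted (flagged) content against
# an audited baseline and alert reviewers when the algorithm's behaviour shifts.
def flag_rate(decisions):
    return sum(1 for d in decisions if d == "demoted") / max(len(decisions), 1)

def drift_alert(baseline_rate: float, todays_decisions, tolerance: float = 0.05) -> bool:
    """True when today's rate moves outside the agreed tolerance band."""
    return abs(flag_rate(todays_decisions) - baseline_rate) > tolerance

baseline = 0.12   # illustrative figure from an earlier audited period
today = ["published"] * 70 + ["demoted"] * 30
if drift_alert(baseline, today):
    print("Escalate to internal review and, if required, the external auditor.")
```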

4.0 CONCLUSION

In conclusion, the problems raised above emphasize the challenging ethical environment that Meta, previously Facebook, must navigate in its pursuit of growth and influence in the digital era. These problems include unethical work practices, complicated algorithmic structures, and the need for creative solutions.

First of all, the discovery of unethical working practices at Meta raises questions about
the organization's values, which put user privacy and safety at odds with development and
profit. The Cambridge Analytica crisis and the accompanying leaks highlight the moral
quandaries that workers face. The company's fixation with growth and Andrew Bosworth's
internal memo serve as examples of how financially motivated choices can have unethical
consequences. The unethical management of user data for targeted advertising additionally
jeopardizes user confidence and raises doubts about Meta's dedication to data protection.

Second, there are additional ethical difficulties brought on by the computational


complexity of Meta's activities. There are worries about false information, polarization, and
the amplifying of unpleasant content due to the complex AI algorithms that determine content
ranking and user experiences. The propagation of hazardous content and challenges to user
safety are caused by the lack of centralized control over content-ranking methods and
inadequate content moderation systems. The ethical consequences of these complexities are
brought to light by the effect AI algorithms have on mental health and the possibility for
political manipulation.

Innovative solutions that put transparency, user safety, and ethical technology use first are needed to solve these problems. Employers can give staff members the freedom to report unethical activity while protecting their privacy by implementing a thorough whistle-blowing mechanism. Adherence to AI ethics ensures ethical AI development and deployment, while privacy-enhancing technologies (PETs) provide a technical solution to safeguard user data and improve data protection methods.

Additionally, the establishment of an outside, independent body can bring in


monitoring to guarantee adherence to moral norms and data privacy. By taking this action,
platforms and users would have more responsibility and transparency, which would help close
the trust gap. Furthermore, implementing organization risk management that is in line with
best practices helps reduce the risks associated with algorithms. This includes strategy,
governance, design, implementation, and monitoring.

In the face of these difficulties, Meta has the chance to prove its dedication to moral
behavior and user satisfaction. Meta can pioneer the way for a digital ecosystem that values
integrity, user privacy, and responsible technology use by putting these cutting-edge ideas
into practice. Growing while maintaining moral standards is a difficult task, but the decisions
made today will have an impact on the organization's influence, reputation, and societal
impact for years to come. The decisions made by Meta will have an impact on the worldwide
technological and ethical landscape in this fast-changing digital environment.

REFERENCES

Anupama, G., & Kumari, P. L. (2014). Study on Ethical Dilemmas of Managers at Workplace. Journal of Strategic Human Resource Management, 3(3).

Arnold, V. (2022). AI content filtering: What it is and how it works. NeuroFlash. https://neuroflash.com/blog/ai-content-filtering/

Bhardwaj, P. (2018, April 9). Facebook employees are quitting or asking to switch departments over ethical concerns, says report. Business Insider. https://www.businessinsider.com/facebook-employees-quitting-whatsapp-instagram-cambridge-analytica-report-2018-4

Brugger, N. (2015). A Brief History of Facebook as a Media Text: The Development of an Empty Structure. First Monday.

Cassidy, J. (2006). How Hanging Out on the Internet Became Big Business. The New Yorker.

Confessore, N. (2018, April 4). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

Cruz, B. S., & Dias, M. O. (2020). Does Digital Privacy Really Exist? When the Consumer is the Product. Saudi Journal of Engineering and Technology, 5(2).

Deloitte. (n.d.). Privacy-enhancing technologies (PETs): A strategic approach. https://www2.deloitte.com/content/dam/Deloitte/us/Documents/us-Meta-PETs-Whitepaper.pdf

Deloitte. (n.d.). Managing algorithmic risks. https://www2.deloitte.com/us/en/pages/risk/articles/algorithmic-machine-learning-risk-management.html

Dwoskin, E., Newmyer, T., & Mahtani, S. (2021, October 25). The case against Mark Zuckerberg: Insiders say Facebook's CEO chose growth over safety. The Washington Post. https://www.washingtonpost.com/technology/2021/10/25/mark-zuckerberg-facebook-whistleblower/

Facebook. (2021). Privacy-enhancing technologies and ads. Facebook Newsroom. https://about.fb.com/news/2021/08/privacy-enhancing-technologies-and-ads/

Federal Trade Commission. (2021, August 19). FTC Alleges Facebook Resorted to Illegal Buy-or-Bury Scheme to Crush Competition After String of Failed Attempts to Innovate. https://www.ftc.gov/news-events/news/press-releases/2021/08/ftc-alleges-facebook-resorted-illegal-buy-or-bury-scheme-crush-competition-after-string-failed

Friedersdorf, C. (2018, March 30). In Defense of the Ugly Facebook Memo. The Atlantic. https://www.theatlantic.com/politics/archive/2018/03/in-defense-of-the-ugly-facebook-memo/556919/

Gallagher, F. (2021, December 3). Facebook failing to tackle COVID-19 misinformation posted by prominent users: Report. ABC News. https://abcnews.go.com/Technology/facebook-failing-tackle-covid-19-misinformation-posted-prominent/story?id=81451479

Gayle, D. (2021, September 14). Facebook knows Instagram is toxic for teen girls, company documents show. The Guardian. https://www.theguardian.com/technology/2021/sep/14/facebook-aware-instagram-harmful-effect-teenage-girls-leak-reveals

Hao, K. (2021, October 5). The Facebook Whistleblower Says Its Algorithms Are Dangerous. Here's Why. MIT Technology Review. https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/

Holmes, A. (2021, April 3). Data on 533 million Facebook users leaked online, including phone numbers. Business Insider. https://www.businessinsider.com/stolen-data-of-533-million-facebook-users-leaked-online-2021-4

Khanna, M. (2021). Facebook Crisis: Whistleblower Shares How FB Chose Profit Over User's Mental Health. IndiaTimes. https://www.indiatimes.com/technology/news/facebook-whistleblower-user-mental-health-550852.html

Kvalnes, Ø. (2019). Moral Reasoning at Work: Rethinking Ethics in Organizations (2nd ed.). London: Palgrave Macmillan.

Lawton, G., & Wigmore, I. (2023). AI ethics (AI code of ethics). WhatIs.com. https://www.techtarget.com/whatis/definition/AI-code-of-ethics

Mac, R., & Kang, C. (2021, October 3). Whistle-Blower Says Facebook 'Chooses Profits Over Safety'. The New York Times. https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html

Mosseri, A. (2016). News Feed: Addressing Hoaxes and Fake News. Facebook Newsroom, media release.

O'Neil, C. (2017). The era of blind faith in big data must end. TED: Ideas Worth Spreading.

Olga, N. (2023). AI vs. Human Content Moderation: Combining forces for safe online experiences. LinkedIn. https://www.linkedin.com/pulse/ai-vs-human-content-moderation-combining-forces-safe-online-olga/

Pedler, T. (2023, May 13). Urgent warning to Facebook users as hackers can break into your account from a simple email. The Sun. https://www.thesun.co.uk/tech/22348423/urgent-warning-facebook-users-hackers-new-way-break-devices/

Perrigo, B. (2022, December 14). New Lawsuit Accuses Facebook of Contributing to Deaths From Ethnic Violence in Ethiopia. Time. https://time.com/6240993/facebook-meta-ethiopia-lawsuit/

Rosencrance, L., & Loshin, P. (n.d.). Two-factor authentication (2FA). TechTarget. https://www.techtarget.com/searchsecurity/definition/two-factor-authentication

Rowland, B. (2010). The Facebook Story. http://digitalcommons.kennesaw.edu/etd?utm_source=digitalcommons.kennesaw.edu%2Fetd%2F71&utm_medium=PDF&utm_campaign=PDFCoverPages

Saurwein, F., & Spencer-Smith, C. (2021). Automated Trouble: The Role of Algorithmic Selection in Social Media Platforms. Media and Communication, 9(4), 222-233.

Schneider, A. P. (2004, March 1). Facebook Expands Beyond Harvard. The Harvard Crimson. https://www.thecrimson.com/article/2004/3/1/facebook-expands-beyond-harvard-harvard-students/

Sen, A., & Zadrozny, B. (2020, August 11). QAnon groups have millions of members on Facebook, documents show. NBC News. https://www.nbcnews.com/tech/tech-news/qanon-groups-have-millions-members-facebook-documents-show-n1236317

Silverman, H., & Huang, L. (2017). Fighting Engagement Bait on Facebook. Facebook Newsroom. https://newsroom.fb.com/news/2017/12/news-feed-fyi-fighting-engagement-baiton-facebook

Solon, O., & Farivar, C. (2019, April 16). Mark Zuckerberg leveraged Facebook user data to fight rivals and help friends, leaked documents show. NBC News. https://www.nbcnews.com/tech/social-media/mark-zuckerberg-leveraged-facebook-user-data-fight-rivals-help-friends-n994706/

Srinivasan, D. (2019). The Antitrust Case Against Facebook: A Monopolist's Journey Towards Pervasive Surveillance in Spite of Consumers' Preference for Privacy. Berkeley Business Law Journal, 16(1).

Stackpole, T. (2022, November 9). Content Moderation Is Terrible by Design. Harvard Business Review. https://hbr.org/2022/11/content-moderation-is-terrible-by-design

Transparency International. (2022). Internal whistleblowing systems. https://www.transparency.org/en/publications/internal-whistleblowingsystems

TrustRadius. (2023). Whistleblowing. https://www.trustradius.com/whistleblowing

Valmary, S. (2023, June 17). Moderating Content on Facebook: No Job for Humans. New Straits Times. https://www.nst.com.my/opinion/columnists/2023/06/921060/moderating-content-facebook-no-job-humans/

WhistleBlower Software. (n.d.). Security. https://whistleblowersoftware.com/en/security

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.



APPENDIX
