Group Assignment Full Report - Meta FB
SUBMITTED TO
DR. UMMU AJIRAH ABDUL RAUF
ANALYTICAL SKILLS (10 marks): Not able to discuss the given task / Minimal ability to discuss the given task / Some ability to discuss the given task / Able to discuss the given task / Able to discuss the task with good illustration

WRITING SKILLS (20 marks)
a) Content: The content was not relevant to the given task / minimally relevant to the given task / generally relevant to the given task / relevant to the given task / very relevant to the given task
PRESENTATION RUBRICS
Each area/activity is scored from 4 (highest) to 1 (lowest).

Visual Appeal (4M)
4: There are no errors in spelling, grammar, and punctuation. Information is clear and concise on each slide. Visually appealing/engaging.
3: There are some errors in spelling, grammar, and punctuation. Too much information on two or more slides. Significant visual appeal.
2: There are many errors in spelling, grammar, and punctuation. Too much information was contained on many slides. Minimal effort made to make slides appealing, or too much going on.
1: There are many errors in spelling, grammar, and punctuation. The slides were difficult to read and too much information had been copied onto them. No visual appeal.

Comprehension (4M)
4: Extensive knowledge of the topic. Members showed complete understanding of the assignment. Accurately answered all questions posed.
3: Most members showed a good understanding of the topic. All members can answer most of the audience's questions.
2: Few members showed good understanding of some parts of the topic. Only some members accurately answered questions.
1: Presenters did not understand the topic. Most questions were answered by only one member, or most of the information was incorrect.

Presentation Skills (4M)
4: Regular/constant eye contact; the audience was engaged, and presenters held the audience's attention. Appropriate speaking volume and body language.
3: Most members spoke to most of the audience, with steady eye contact. The audience was engaged by the presentation. Most presenters spoke at a suitable volume. Some fidgeting by member(s).
2: Members focused on only part of the audience. Sporadic eye contact by more than one presenter. The audience was distracted. Speakers could be heard by only half of the audience. Body language was distracting.
1: Minimal eye contact by more than one member, focusing on a small part of the audience. The audience was not engaged. Most presenters spoke too quickly or too quietly, making it difficult to understand. Inappropriate/disinterested body language.

Content (4M)
4: The presentation was a concise summary of the topic with all questions answered. Comprehensive and complete coverage of information.
3: The presentation was a good summary of the topic. Most valuable information covered; little irrelevant info.
2: The presentation was informative, but several elements went unanswered. Much of the information is irrelevant; coverage of only some major points.
1: The presentation was a brief look at the topic, but many questions were left unanswered. Most of the information is irrelevant and significant points were left out.

Preparedness/Participation/Group Dynamics (4M)
4: All presenters knew the information, participated equally, and helped each other as needed. Extremely prepared and rehearsed.
3: Slight domination by one presenter. Members helped each other. Very well prepared.
2: Significant controlling by some members, with one member contributing only minimally. Primarily prepared, but with some dependence on just reading off slides.
1: Unbalanced presentation or tension resulting from over-helping. Multiple group members not participating. Evident lack of preparation/rehearsal. Dependence on slides.
PRESENTATION INDIVIDUAL MARKS
Each member is assessed out of 20 marks (5%) across five criteria: Visual Appeal (4M), Comprehension (4M), Presentation Skills (4M), Content (4M), and Preparedness/Participation/Group Dynamics (4M).

1. NAME: NURUL SHAZALINA ZAINUDIN
   ID NUMBER: P131367

2. NAME: LAW WU JIA
   ID NUMBER: P131100

3. NAME: LAW JUN YI
   ID NUMBER: P131047

4. NAME: CARMEN CHAI WAN YIN
   ID NUMBER: P131045

5. NAME: KUA TIONG YEE
   ID NUMBER: P131374

6. NAME: ASHRAF AZUAS
   ID NUMBER: P131279
STUDENTS BIOGRAPHY
Hi, I am Ashraf bin Azuas, a Senior Aeronautical Engineer (BEM GE148636A). I have worked in aircraft MRO and engineering design companies for almost 13 years, since graduating from USM with a Bachelor of Aerospace Engineering in 2010. I have specialized engineering skills and varied experience in operations management. I completed a Master of Engineering in Manufacturing Systems at UKM in 2021 to understand the production of aerospace parts more deeply and to leverage my skills and knowledge. I am now pursuing my MBA at UKM to learn about organizational management and to further my knowledge of business. With this knowledge, I will be ready to contribute the best of myself to my current or future company and to my own businesses.
Hi, my name is Nurul Shazalina Zainudin. I hold a PhD in parasitology from the UKM Faculty of Medicine, as well as an MSc in molecular medicine and a BSc in biotechnology, both from USM. As a trained auditor, I currently hold the position of Quality Manager at a pharmaceutical company, where I ensure adherence to ISO 9001:2015 standards. I am also a Certified Halal Executive. My background includes working as a Scientific Manager in the medical device sector. Outside of the corporate world, I have successfully started and run six early-education branches.
ACKNOWLEDGEMENT
This collaborative effort is the result of the team members' excellent contributions of ideas. We sincerely appreciate every member's help and understanding in getting this work done, as well as the support of their families and friends.
We would also like to express our gratitude to our previous lecturers, who inspired us and shared the insights and experiences that informed this group project.
1.0 INTRODUCTION
Few organizations have had as much impact and influence in the constantly changing world of social media and technology as Facebook. Initially created as a platform to connect people around the world, it swiftly developed into a digital powerhouse that not only revolutionized communication but also shaped the interactions of more than two billion people across the globe (Srinivasan, 2019). Facebook has unquestionably left a lasting mark on contemporary society, from its modest origins in a college dorm to its current position as a worldwide giant. However, such power also brings a variety of difficulties, some of which have created serious moral conundrums and brought the firm into the public eye for all the wrong reasons.
Facebook is an online social media and networking service owned by Meta Platforms Inc., an American multinational technology conglomerate based in Menlo Park, California. Facebook grew out of the Facemash website, which Mark Zuckerberg first introduced in 2003 while attending Harvard University (Cruz & Dias, 2020). Zuckerberg created Facemash for fun, with the objective of comparing two pictures drawn from the Houses' data to vote on who was the most attractive. Upon launch, the website became a sensation, attracting 450 visitors and 22,000 photo views in the first four hours. Owing to concerns over network traffic and violations of copyright, security, and privacy, the website was shut down immediately (Cruz & Dias, 2020).
Inspired by a Crimson editorial on his Facemash idea, Zuckerberg and Eduardo Saverin invested $1,000 each to create "thefacebook.com" on 4 February 2004 (Brugger, 2015). Membership was initially limited to Harvard students, and at least half of the undergraduates registered. With this strong support, the team, consisting of Zuckerberg and Saverin along with new members Dustin Moskovitz, Andrew McCollum, and Chris Hughes, grew the website (Brugger, 2015). In March 2004, Facebook expanded to other universities, including Columbia, Yale, Stanford, all the Ivy League colleges, Boston University, NYU, MIT, and eventually most universities in the US and Canada (Schneider, 2004). With the help of investors such as Peter Thiel, Accel Partners, and Jim Breyer, "thefacebook" dropped its "the" and became facebook.com, as it is known today, in May 2005, becoming accessible to all (Cassidy, 2006).
A dilemma is a situation in which a decision-maker must choose between two or more options of roughly equal moral importance or ethical value (Kvalnes, 2019). Michael Rion stated that "Ethical dilemmas often arise as the unintended consequences of well-intentioned actions, not from unethical motives" (Anupama & Kumari, 2014). As noted above, Facebook's power has created serious moral inconsistencies and brought the firm into the public eye for all the wrong reasons.
Facebook began to modify its algorithms so that only the most relevant posts were shown in a user's feed, as the social network (and the Internet overall) grew and there was simply too much information for one user to absorb. Since then, Facebook's algorithms have placed a high value on sorting and presentation (Silverman & Huang, 2017). With a focus on posts from family and friends rather than commercial relationships, the algorithms adjust the order and presentation of posts so that users see only what Facebook believes to be the most relevant content for them. Since 2018 (Mosseri, 2018), this focus has intensified, which can pose challenges for organizations even as it heightens the public perception of Facebook's significance. As a result, businesses must produce more genuine posts and foster active involvement; in other words, they must be "friends" with their clients. The change only means that the algorithms favor quality material based on genuine connection and meaningful contacts; it does not mean that businesses are no longer promoted at all in users' news feeds (Rowland, 2010).
One of the major problems with this algorithm is that there is no single Facebook team in control of the entire content-ranking mechanism. Depending on their team's goals, engineers create and incorporate their own machine-learning models into the mix. For instance, teams tasked with deleting or demoting offensive content, referred to as integrity teams, train models only for identifying various forms of offensive content. Facebook made this choice as part of its early "move fast and break things" principles. It created an internal tool called FBLearner Flow that enabled engineers without prior machine-learning experience to build whatever models they required. By one account, more than a quarter of Facebook's engineering team was already using it in 2016 (Hao, 2021).
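The fragmented ownership described above can be pictured as many independently trained models whose scores are simply summed at serving time, so that no single team controls the final ordering. The sketch below illustrates the idea only; the scorer names, weights, and fields are hypothetical and are not Facebook's actual system.

```python
# Hypothetical sketch: each team contributes its own scoring model, and the
# final feed order emerges from their combined scores, which no one team owns.
from typing import Callable

# Stand-ins for independently trained per-team models (all invented here).
def engagement_score(post: dict) -> float:
    """A growth team's model: reward predicted engagement."""
    return post["predicted_clicks"]

def integrity_penalty(post: dict) -> float:
    """An integrity team's model: demote content it flags as offensive."""
    return -5.0 if post["looks_offensive"] else 0.0

def friends_boost(post: dict) -> float:
    """A 'meaningful interactions' model: boost posts from friends."""
    return 2.0 if post["from_friend"] else 0.0

SCORERS: list[Callable[[dict], float]] = [
    engagement_score, integrity_penalty, friends_boost,
]

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts by the sum of every team's model score, highest first."""
    return sorted(posts, key=lambda p: sum(s(p) for s in SCORERS), reverse=True)
```

Under this arrangement, changing any one model silently reshuffles the whole feed, which is why no individual team can predict or control what users ultimately see.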
In an interview, Frances Haugen said that people were found to be more likely to feel anger than other emotions when exposed to the Facebook algorithm that selects the content shown in a user's News Feed. Haugen claims that Facebook has been aware of this through its own research, but it also understands that changing the algorithm to make users feel safer would cause users to spend less time on the site, which would lead to fewer clicks and, ultimately, less revenue (Khanna, 2021).
This report's main goal is to investigate comprehensively the potential hazards created by the algorithms used by the Meta (formerly Facebook) platform. It examines the underlying factors that fuel the risks connected to these algorithms, revealing their complex workings and the effects they have on users and society at large. This study looks into the potential drawbacks of algorithmic decisions in an effort to highlight how urgent it is to address these problems. The report will also look at innovative solutions that have been developed as safeguards against the dangers of algorithmic decision-making at Meta.
There has been a workplace dilemma among Meta Facebook's employees, who are discontented with the unethical workplace practices in the organization concerning data privacy and the fact that profit is being prioritized over safety. The struggle over these practices was uncovered by the Cambridge Analytica (CA) scandal, the largest known data leak in Facebook history (Confessore, 2018). Later, many employees, including ex-employee Frances Haugen, blew the whistle on Facebook's unethical workplace practices. The world learned that Facebook had become increasingly leaky and that many of its employees had grown discontented with the situation (Mac & Kang, 2021).
2.1 Causes
2.1.1 Obsession of the organization with growth and profit over safety
The organization's obsession with growth and profit over safety has led not only employees but also the public to question its credibility. A controversial internal Facebook memo titled "The Ugly" featured Vice President Andrew Bosworth's statement on the company's aggressiveness in connecting people. In the memo, Bosworth expressed his belief in growing and connecting people at all costs, even if someone dies by suicide or a terrorist attack results. The memo showed an executive pushing employees to embrace Meta Facebook's growth strategy without considering the consequences, and the dark side of this stance made the company's ethics questionable (Friedersdorf, 2018).
Unethical tactics are seen as inevitable in the organization, and employees can only choose to take it or leave it, which sparked discontent. This stems from the way Meta Facebook makes its revenue: through advertisements that target its users. While Facebook's Chief Executive Officer (CEO), Mark Zuckerberg, has consistently made statements to reassure the public of the platform's safety, leaked company documents showed that Zuckerberg, along with his board and management team, leveraged user data for the companies Facebook partnered with while cutting off data access for its potential competitors (Solon & Farivar, 2019).
Data leakage has been a continuous issue throughout the years, and it sparked controversy over platform safety in the 2018 CA scandal. CA ex-employee Christopher Wylie blew the whistle on CA's improper acquisition of private Facebook data to build and sell psychological profiles to political campaigns without users' consent. It emerged that employees had tried to alert Facebook as early as September 2015, and that Facebook had asked CA to delete the data but failed to follow up. The scandal led disappointed Facebook engineers to quit or to request transfers to other divisions, such as WhatsApp or Instagram, over ethical concerns (Bhardwaj, 2018). Later, in 2021, Facebook experienced another data leak, in which the personal data of 533 million Facebook users was published on a low-level hacking forum to be viewed for free. The published personal data included phone numbers, Facebook IDs, locations, birth dates, email addresses, and more. The leak occurred because Facebook had only hastily managed a vulnerability in its platform, allowing the data to be scraped (Holmes, 2021).
Engineers build and integrate their own machine-learning models into the algorithms based on their teams' objectives. This is part of Facebook's "move fast, break things" culture, but the system has grown enormously complex. As a consequence, no one can track and control the type of content served to users' news feeds, which leads to integrity and security issues in the algorithms (Hao, 2021). The AI algorithms can end up amplifying negative content, such as misinformation, hate speech, extreme negative viewpoints, polarizing content, and even conspiracy theories.
A poor and insufficient content-moderation system, coupled with the risk of negative content amplification by the AI algorithms, raises employees' concern about the dark side of the platform. Although content moderation can be done by automation or by employees, there is no limit to the amount of content that can be posted on the platform, and the system may rely on reporting by users, which is inconsistent. Besides, it is difficult to balance the mental health of employees who are constantly exposed to harmful content against the limitations of automation in linguistic and cultural competence (Stakepole, 2022). This became a problem for over 180 sacked Sama employees, subcontracted content moderators who are seeking compensation for mental-health damage and "unlawful" dismissal (Valmary, 2023).
2.2 Impacts
2.2.1 Cybercrime
A platform with frequent data leaks leaves its users prone to cybercrime such as phishing, scams, and hacking. A small amount of private information is usually enough for bad actors to hack accounts, impersonate people, or commit fraud (Holmes, 2021). Earlier this year, Meta Facebook warned users after finding malicious software that claimed to offer ChatGPT-based tools for Facebook users. The malware operators tried to trick users into clicking a malicious link or installing software, ultimately giving them a chance to break into the device (Pedler, 2023).
Meta Facebook's obsession with profit has resulted in unethical tactics to crush competitors through an illegal buy-and-bury scheme. In an effort to remain the monopolist in its field, Facebook uses its open platform as bait for third-party software developers and then buries successful apps that appear to be potential competitors. This not only violates antitrust law; the monopolist also suppresses the competition that drives innovation and limits users' choice (Federal Trade Commission, 2021).
As Meta Facebook has the ability to remove certain content, and there is a lack of standards for fair censorship, its decisions can lead to political manipulation. One instance is the selective censorship of propaganda by anti-government groups in Vietnam as an election approached. Although the company's CEO, Mark Zuckerberg, champions freedom of speech, his actions in Vietnam contradict what he claims to stand for. During Vietnam's party congress, Zuckerberg chose to comply with the ruling Communist Party's censorship of anti-government posts to avoid being knocked offline and losing annual revenue of USD 1 billion. This not only contradicts the company's professed practice of free speech; it also allows nearly total control of the social media platform by the ruling party (Dwoskin, 2021).
A leak suggests that Meta Facebook has been aware of Instagram's harmful effect on the mental health of teenage girls, through the creation of negative body-image issues, but kept this knowledge quiet for two years. According to the leaked internal research, 32% of teenage girl users say that when they feel bad about their bodies, Instagram makes them feel worse. This contributes to rising anxiety and depression, ultimately resulting in suicidal thoughts. A further finding showed that 13% of British and 6% of American users' suicidal thoughts could be traced back to the app. Another source cites that 40% of Instagram users who reported feeling "unattractive" admitted the feeling began on the app (Gayle, 2021). Because Facebook's AI algorithm amplifies the content a user engages with most, users with existing negative views and low self-esteem are fed ever more extreme content based on their engagement with the platform. This ultimately results in the deterioration of some users' mental health (Hao, 2021).
A great deal of misinformation stems from Meta Facebook's algorithm, and its Groups feature has been found to help extremists and activists gain audiences and support, from QAnon conspiracy theorists to anti-vaccination activists. The QAnon conspiracy started as an unfounded theory that President Trump is fighting Satan-worshipping paedophiles; President Trump played along with the misinformation and is viewed as a hero among believers (Sen, 2020). Besides that, COVID-19 deniers and anti-vaccination activists spread fake news claiming that COVID-19 is a scam or a hoax brought about by the government, which made curbing COVID-19 more difficult (Gallagher, 2021).
Meta Facebook's security issues enable the spread of hatred towards a person or a group of people, which poses a threat to human safety. At some point, AI amplification of such posts, coupled with Facebook's failure to remove them, contributed to the deaths of ethnic minorities in war. In one lawsuit, a plaintiff named Abrham Meareg, himself an ethnic minority, alleged that Facebook refused to remove hateful posts about his father, a respected chemistry professor at Bahir Dar University in the Amhara region of Ethiopia, despite his reporting the posts many times. As a result, his father was followed and shot to death (Perrigo, 2022).
To address the dilemma faced by Meta Facebook's employees, who struggle with their dissatisfaction over the organization's unethical practices around data privacy and its prioritization of profit over safety, Meta Facebook should introduce an effective whistle-blowing system. Such a system would offer a secure channel for employees and stakeholders to report any instances of unethical behaviour or violations of company policies (Transparency International, 2022). By implementing this whistle-blowing system, Meta Facebook can encourage transparency and ensure the protection and anonymity of those who blow the whistle.
Next, to address the above dilemma, Meta Facebook has proactively implemented Privacy-Enhancing Technologies (PETs) to fortify its data protection measures against potential data breaches across all its platforms, including Facebook, Instagram, and WhatsApp. By employing PETs, Meta strategically reduces the extent of data processing, effectively safeguarding personal information from vulnerabilities. These advanced PETs serve a dual purpose: bolstering data security and optimizing digital advertising for marketers seeking efficient solutions (Facebook, 2021).
Meta Facebook is best known as a hub for social connectivity and for its significant involvement in social advertising, and a substantial portion of Meta's revenue stems from that advertising. The company operates an expansive database encompassing user profiles, personal details, IDs, and passwords. It is therefore Meta's responsibility to prevent potential data breaches.
To address the concern of data leakage and bolster user data privacy, Meta Facebook should strengthen its security measures by implementing enhanced Two-Factor Authentication (2FA). Two-Factor Authentication, also referred to as two-step verification, is a robust security protocol in which users must provide two different authentication factors, such as a secure password and biometric information, to verify their identity (Rosencrane & Loshin, n.d.). This approach serves as a formidable defence: even if data is compromised, users can still safeguard their personal information, because unauthorized parties cannot meet the dual-factor requirements for account access (Rosencrane & Loshin, n.d.).
Introducing Two-Factor Authentication serves the dual purpose of strengthening user data security and safeguarding the resources users can access. Adding an extra protection layer raises the difficulty for potential attackers attempting to breach online accounts. The authentication factors fall into several categories (Rosencrane & Loshin, n.d.):
• Knowledge (password or PIN)
• Possession (ID card, mobile device, or smartphone)
• Biometrics (facial or fingerprint recognition)
• Context (location and time)
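Combining the knowledge and possession factors above is commonly done with a time-based one-time password (TOTP), the scheme standardized in RFC 6238 and used by typical authenticator apps: the server and the user's device share a secret, and the device derives a short code from the current time. A minimal sketch (the `verify_login` helper and its fields are our own illustration, not any specific Meta API):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC of the time counter
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password_ok: bool, submitted_code: str, secret: bytes) -> bool:
    """Both factors must pass: knowledge (password) and possession (device code)."""
    return password_ok and hmac.compare_digest(submitted_code, totp(secret))
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a stolen password alone is no longer sufficient to access the account.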
As another innovative solution, it is recommended that Meta Facebook focus on managing its usage of Artificial Intelligence (AI) in its social media applications and on practicing AI ethics. AI ethics is a system of moral principles and techniques that guide the ethical development, deployment, and usage of AI technologies. Simply implementing AI is insufficient; it is crucial that the use of AI aligns with ethical principles.
Lawton and Wigmore (2023) suggest applying AI ethics: AI has become essential to most products and services, particularly social media platforms, and organizations are starting to develop their own AI codes of ethics.
The objective of an AI code of ethics is to give users guidance when faced with an ethical dilemma relating to the use of AI. The advancement of AI has raised significant ethical concerns that require a thoughtful and principled approach to how AI is created and put into use. AI ethics encompasses a system of moral values and guidelines that steer the responsible utilization of AI technology. While AI offers unparalleled opportunities, its potential pitfalls and unintended consequences underscore the importance of adhering to AI ethics (Lawton & Wigmore, 2023).
The benefits of an AI code of ethics include aligning with customer-centric and socially conscious trends, enhancing brand perception, and fostering positive societal impact. It also creates a cohesive workplace environment by reassuring employees about their company's values (Lawton & Wigmore, 2023).
The adoption of a robust AI code of ethics presents Meta Facebook with both a moral
imperative and a strategic opportunity. By prioritizing inclusivity, transparency, and
responsible data usage, Meta can gain user trust and align with customer-centric trends. Such
ethical considerations will guide decision-making, preventing misuse and ensuring AI
systems that adhere to human values. This commitment positions Meta at the forefront of
responsible AI innovation.
The next innovative solution is for Meta Facebook to hire an external, independent regulator to audit its systems. Saurwein and Spencer-Smith (2021) suggest implementing external, independent auditing for the purpose of inspecting content-moderation systems. They recommend that an independent regulator be empowered and resourced to enforce platforms' due-diligence and transparency obligations. However, the effectiveness of this governance framework is conditional on the level of transparency between the platform and its independent regulator. The independent regulator should have the power to demand any type of granular evidence necessary to fulfil its supervisory tasks, and to impose fines or other corrective measures if information is not provided in a timely manner.
Moreover, Saurwein & Spencer-Smith (2021) also highlighted that such regulations
shall mandate large online platforms to enhance the transparency of their algorithmic systems,
including but not limited to providing opportunities to opt out of profiling and
personalization, to protect services from manipulation, and to carry out risk assessments to
avoid the spread of illegal content, restrictions of fundamental rights, and manipulation.
An example can be seen in the recent introduction of the Digital Services Act by the European Commission. Though in its infancy, this initiative signifies a pivot towards tighter oversight of internet platforms and their algorithm-driven functions. The proposal advocates external auditing and EU-level technical assistance to inspect content moderation, recommender algorithms, and online advertising, with the aim of establishing accountability and ensuring responsible digital conduct (Saurwein & Spencer-Smith, 2021).
With compulsory audit and inspection powers, a regulator is empowered to rectify the
information asymmetry that currently defines the public’s relationship with large technology
companies. These powers are essential for effective oversight and compliance: a regulator
would struggle to achieve its statutory goals without these powers in place.
The last innovative solution for Meta Facebook is to create strong foundations of enterprise risk management aligned with leading practices and regulatory requirements. In dealing with the ethical landscape of algorithmic systems, a well-rounded approach is imperative, encompassing strategy, design, deployment, and monitoring. These are the key steps for the responsible management of algorithmic risks (Managing Algorithmic Risks, n.d.).
Design, development, deployment, and use: create procedures and strategies that align with the governance framework to handle the entire algorithm life cycle, starting from the data.
Monitoring and testing: create procedures to evaluate and supervise algorithmic data inputs, operations, and outcomes, making use of cutting-edge tools as they become accessible. Seek objective reviews of algorithms by internal and external parties (Managing Algorithmic Risks, n.d.).
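The monitoring-and-testing step above can be sketched as a simple automated check on algorithmic outcomes, for example alerting a reviewer when the share of flagged content actually served to users drifts past a policy threshold. The function names, fields, and threshold below are illustrative, not any real Meta tooling.

```python
# Illustrative outcome monitor: alert when served content drifts past a
# policy threshold, so a human reviewer can investigate the algorithm.

def harmful_share(served_posts: list[dict]) -> float:
    """Fraction of served posts that a moderation model flagged as harmful."""
    if not served_posts:
        return 0.0
    return sum(p["flagged_harmful"] for p in served_posts) / len(served_posts)

def check_outcomes(served_posts: list[dict], threshold: float = 0.01) -> list[str]:
    """Return alert messages for a reviewer; an empty list means the check passed."""
    alerts = []
    share = harmful_share(served_posts)
    if share > threshold:
        alerts.append(
            f"harmful-content share {share:.2%} exceeds policy threshold {threshold:.2%}"
        )
    return alerts
```

Running such a check continuously over samples of served content, and routing its alerts to internal and external reviewers, is one concrete way to operationalize the "objective reviews" that the risk-management framework calls for.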
4.0 CONCLUSION
First of all, the discovery of unethical working practices at Meta raises questions about
the organization's values, which put user privacy and safety at odds with development and
profit. The Cambridge Analytica crisis and the accompanying leaks highlight the moral
quandaries that workers face. The company's fixation with growth and Andrew Bosworth's
internal memo serve as examples of how financially motivated choices can have unethical
consequences. The unethical management of user data for targeted advertising additionally
jeopardizes user confidence and raises doubts about Meta's dedication to data protection.
Innovative solutions that put transparency, user safety, and ethical technology use first
are needed to solve these problems. Employers can give staff members the freedom to report
unethical activity while protecting their privacy by implementing a thorough whistleblowing
mechanism. While adhering to AI ethics ensures ethical AI development and deployment,
privacy enhancing technologies (PETs) provide a technical solution to safeguard user data and
improve data protection methods.
In the face of these difficulties, Meta has the chance to prove its dedication to moral
behavior and user satisfaction. Meta can pioneer the way for a digital ecosystem that values
integrity, user privacy, and responsible technology use by putting these cutting-edge ideas
into practice. Growing while maintaining moral standards is a difficult task, but the decisions
made today will have an impact on the organization's influence, reputation, and societal
impact for years to come. The decisions made by Meta will have an impact on the worldwide
technological and ethical landscape in this fast-changing digital environment.
REFERENCES
https://neuroflash.com/blog/ai-content-filtering/
Bhardwaj, P. (2018, April 9). Facebook employees are quitting or asking to switch
Cassidy, J. (2006). Me Media: How Hanging Out on the Internet Became Big Business. The New Yorker.
Confessore, N. (2018, April 4). Cambridge Analytica and Facebook: The Scandal and the
Cruz, B. S., & Dias, M. O. (2020). Does Digital Privacy Really Exist? When the
https://www2.deloitte.com/content/dam/Deloitte/us/Documents/us-Meta-PETs-
Whitepaper.pdf
https://www2.deloitte.com/us/en/pages/risk/articles/algorithmic-machine-learning-
risk-management.html
Dwoskin, E., Newmyer, T., & Mahtani, S. (2021, October 25). The case against Mark
Zuckerberg: Insiders say Facebook’s CEO chose growth over safety. The Washington
Post. https://www.washingtonpost.com/technology/2021/10/25/mark-zuckerberg-
facebook-whistleblower/
https://about.fb.com/news/2021/08/privacy-enhancing-technologies-and-ads/
Federal Trade Commission. (2021, August 19). FTC Alleges Facebook Resorted to Illegal
Friedersdorf, C. (2018, March 30). In Defense of the Ugly Facebook Memo. The Atlantic.
https://www.theatlantic.com/politics/archive/2018/03/in-defense-of-the-ugly-
facebook-memo/556919/
Gayle, D. (2021, September 14). Facebook knows Instagram is toxic for teen girls,
company
Hao, K. (2021). The Facebook Whistleblower Says its Algorithms are Dangerous. Here’s
Why. https://www.technologyreview.com/2021/10/05/1036519/facebook-
whistleblower-frances-haugen-algorithms/
Holmes, A. (2021, April 3). Data on 533 million Facebook users leaked online, including
Khanna, M. (2021). Facebook Crisis: Whistleblower Shares How FB Chose Profit Over
Kvalnes, Ø. (2019). Moral reasoning at work: Rethinking ethics in organizations (2nd ed.).
Lawton, G., & Wigmore, I. (2023). AI ethics (AI code of ethics). WhatIs.com.
https://www.techtarget.com/whatis/definition/AI-code-of-ethics
Mac, R., & Kang, C. (2021, October 3). Whistle-Blower Says Facebook ‘Chooses Profits
Mosseri, A. (2016). News Feed: Addressing Hoaxes and Fake News. Facebook News
O'Neil, C. (2017). The era of blind faith in big data must end. TED: Ideas Worth Spreading.
Olga, N. (2023). AI vs. Human Content Moderation: Combining forces for safe online
Pedler, T. (2023, May 13). Urgent warning to Facebook users as hackers can break into
http://digitalcommons.kennesaw.edu/etd?utm_source=digitalcommons.kennesaw.edu
%2Fetd%2F71&utm_medium=PDF&utm_campaign=PDFCoverPages
https://www.techtarget.com/searchsecurity/definition/two-factor-authentication
Saurwein, F., & Spencer-Smith, C. (2021). Automated Trouble: The Role of Algorithmic Selection in Social Media Platforms. Media and Communication, 9(4), 222-233.
https://www.thecrimson.com/article/2004/3/1/facebook-expands-beyond-harvard-
harvard-students/
Sen, A., & Zadrozny, B. (2020, August 11). QAnon groups have millions of members on
Silverman, H., & Huang, L. (2017). Fighting Engagement Bait on Facebook. Facebook
Newsroom. https://newsroom.fb.com/news/2017/12/news-feed-fyi-fighting-
engagement-baiton-facebook
Solon, O., & Farivar, C. (2019, April 16). Mark Zuckerberg leveraged Facebook user data
to fight rivals and help friends, leaked documents show. NBC News.
https://www.nbcnews.com/tech/social-media/mark-zuckerberg-leveraged-facebook-
user-data-fight-rivals-help-friends-n994706/
International. https://www.transparency.org/en/publications/internal-
whistleblowingsystems#:~:text=IWS%20provide%20safe%20channels%20for,and%2
0guide%20the%20organisation's%20response.
https://www.trustradius.com/whistleblowing
Valmary, S. (2023, June 17). Moderating Content on Facebook: No Job for Humans. New
Straits Times.
https://www.nst.com.my/opinion/columnists/2023/06/921060/moderating-content-
facebook-no-job-humans/
https://whistleblowersoftware.com/en/security
Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information
APPENDIX