Ia2 QB Ver1
1. Make use of Westin’s categorization and explain how people can be classified based on their
privacy choices.
Ans:
Fundamentalists (25%)
• These individuals are highly protective of their privacy and are generally skeptical of
organizations collecting personal data.
• They avoid sharing information online and are likely to opt out of services that require
access to their private data.
Example: A fundamentalist might never post personal photos on social media, avoid using
apps that track location, and disable cookies in their web browser to prevent data tracking.
Pragmatists (60%)
• Pragmatists weigh the benefits and risks of sharing personal information. They adjust
their privacy settings depending on the context and trust certain platforms more than
others.
• They may share limited personal data if they feel it’s necessary for a useful service but
will also re-evaluate their privacy choices over time.
Example: A pragmatist might share vacation pictures on Facebook but only with close
friends, or use a fitness tracking app while disabling location sharing.
Unconcerned (15%)
• Unconcerned users freely share personal information online without much thought
about privacy risks. They are less worried about data breaches or corporate surveillance.
• They often use public social media profiles, accept friend requests from strangers, and
rarely review privacy settings.
Example: An unconcerned user might check into locations on Foursquare, post daily life
updates publicly, and use the same password across multiple sites without considering
potential risks.
5. Write a Python snippet to plot a histogram using example data. What are the advantages of
NumPy arrays over lists in Python?
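Ans:
A minimal sketch using NumPy and matplotlib (the data below is randomly generated purely as an
example):

import numpy as np
import matplotlib.pyplot as plt

# Example data: 1,000 samples drawn from a normal distribution
data = np.random.normal(loc=50, scale=10, size=1000)

# Plot a 20-bin histogram of the sample
plt.hist(data, bins=20, edgecolor="black")
plt.title("Histogram of Example Data")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.show()

Advantages of NumPy arrays over lists:
• Speed: vectorized operations run in compiled C code, far faster than looping over a Python list.
• Memory: elements share one fixed dtype and sit in contiguous memory, unlike lists of boxed
Python objects.
• Semantics: arrays support element-wise arithmetic and broadcasting directly (list + list
concatenates, while array + array adds element-wise).
• Ecosystem: NumPy provides a large library of mathematical and statistical routines that operate
on whole arrays at once.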
6. What is the experiment done by Latanya Sweeney? Explain how re-identification of data can
be done with an example.
Ans: Her experiment showed that anonymized health records could be linked to public voter
registration data to re-identify individuals.
Methodology:
1. Data Sources:
o Sweeney obtained anonymized health records from a state agency.
o She also accessed the Massachusetts voter registration list, which contained names,
addresses, dates of birth, and gender (all publicly available).
2. Linking the Data:
o Even though health records didn’t contain names, they included ZIP codes, birth dates,
and gender.
o Sweeney discovered that 87% of the US population could be uniquely identified using
just:
▪ Date of Birth
▪ Gender
▪ ZIP Code
3. Example:
o The Governor of Massachusetts, William Weld, had his anonymized medical records
included in the dataset.
o Sweeney cross-referenced the health records with the voter list using his birth date, ZIP
code, and gender.
o She successfully re-identified his health records, proving the risk of re-identification (a
small linking sketch follows this list).
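The linking step can be illustrated with a small pandas sketch (the records and values below are
hypothetical stand-ins, not Sweeney's actual data):

import pandas as pd

# Hypothetical "anonymized" medical records: names removed, quasi-identifiers kept
medical = pd.DataFrame({
    "zip": ["02138", "02139"],
    "dob": ["1945-07-31", "1960-01-15"],
    "sex": ["M", "F"],
    "diagnosis": ["hypertension", "asthma"],
})

# Hypothetical public voter roll: names plus the same quasi-identifiers
voters = pd.DataFrame({
    "name": ["William Weld", "Jane Doe"],
    "zip": ["02138", "02139"],
    "dob": ["1945-07-31", "1960-01-15"],
    "sex": ["M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to the "anonymous" records
reidentified = medical.merge(voters, on=["zip", "dob", "sex"])
print(reidentified[["name", "diagnosis"]])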
Key Findings:
• Anonymization isn’t enough to protect privacy if data can be linked to public datasets.
• Even with personal identifiers removed, people can be re-identified using minimal attributes
(like birth date + ZIP code).
Impact:
• Sweeney’s work influenced stricter privacy regulation, most notably the de-identification
standards of the HIPAA (Health Insurance Portability and Accountability Act) Privacy Rule.
• It highlighted the risks of data sharing without robust anonymization techniques.
7. Identify the different attributes that can be used to identify fake accounts/handles on social
media.
Ans:
• Suspension algorithms: Accounts with spam activity are suspended quickly.
• Blacklist monitoring: URLs from known shortening services (like bitly or tinyurl) are flagged.
• Bot detection systems: Frequency analysis helps spot automated behavior (e.g., posting too
frequently; see the sketch after this list).
• Network analysis: Platforms analyze follower patterns to detect link farms.
• User reporting: Users can report suspicious accounts and phishing attempts.
• Login activity monitoring: Accounts that stay inactive for long periods or show suspicious
login patterns may get suspended.
• Click rate analysis: Platforms track unusually high click-through rates, especially for URLs
leading to phishing or scam pages.
• Spam campaign detection: Platforms identify coordinated spam campaigns and can
suspend multiple linked accounts at once.
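To make the frequency-analysis idea concrete, here is a toy sketch with invented data and an
assumed posting-rate threshold (real platforms combine many more signals):

from datetime import datetime, timedelta

# Hypothetical post timestamps: 200 posts, one every 30 seconds
posts = [datetime(2024, 5, 1, 12, 0) + timedelta(seconds=30 * i) for i in range(200)]

def looks_automated(timestamps, max_posts_per_hour=60):
    # Flag accounts whose posting rate exceeds the assumed human ceiling
    if len(timestamps) < 2:
        return False
    span_hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600
    return len(timestamps) / max(span_hours, 1e-9) > max_posts_per_hour

print(looks_automated(posts))  # True: roughly 120 posts/hour, well above the threshold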
8. Identify the positive outcomes of publicizing police interactions on social media. Discuss any
one campaign that backfired.
Ans:
Positive outcomes: publicizing police interactions can increase transparency, strengthen public
accountability, and build community trust when the interactions shared are positive.
#MyNYPD Campaign
Details:
1. The New York Police Department (NYPD) launched the #MyNYPD campaign on Twitter to
encourage citizens to share positive photos with officers.
2. The campaign aimed to build community trust and showcase positive interactions with law
enforcement.
Why It Backfired:
1. Users started posting negative images of police misconduct and brutality.
2. The hashtag trended for the wrong reasons, amplifying public criticism.
3. The campaign unintentionally revived old controversies and escalated public outrage.
4. Instead of fostering goodwill, it highlighted strained police-citizen relationships.
5. The backlash revealed the risk of public campaigns without anticipating potential negative
sentiment.
9. Identify 7 different e-crimes on social media and give one example for each. - PFFICWC
Ans:
Phishing: Tricking users into providing credentials via fake websites (a toy lookalike-domain
check appears after this list).
→ Example: 2021 Facebook phishing scam, where users received fake login alerts, leading them
to a page that stole their credentials.
Fake customer service accounts: Scammers pose as legitimate organizations.
→ Example: In 2019, fraudsters impersonated Netflix support on Twitter to steal credit card
details.
Fake live streaming videos: During popular events, fake links are promoted.
→ Example: During the FIFA World Cup, scammers posted fake live-stream links leading users to
malicious sites instead of match coverage.
Impersonation: Creating fake accounts mimicking real people or organizations.
→ Example: The Elon Musk Twitter scam, where impersonators promised cryptocurrency
giveaways in exchange for small Bitcoin deposits.
Clickbaiting: Misleading links to attract users to malicious sites.
→ Example: The “You won an iPhone!” scam, where users were tricked into clicking fake prize
claim links loaded with malware.
Work-from-home scams: Fake job offers promising easy money.
→ Example: The "Amazon job offer" scam, where victims were asked to pay upfront fees for fake
remote job opportunities.
Compromised accounts: Gaining access to legitimate accounts and misusing them.
→ Example: The 2013 Associated Press Twitter hack, which falsely reported explosions at the
White House, briefly causing stock market turmoil.
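As a toy illustration of the lookalike names that phishing and impersonation rely on, the sketch
below scores a domain's similarity to a few known brands (the domain list and the idea of a
similarity cutoff are assumptions for illustration, not any platform's actual method):

import difflib

LEGIT_DOMAINS = ["facebook.com", "netflix.com", "amazon.com"]

def lookalike_score(domain):
    # Return the closest legitimate domain and a 0-1 similarity ratio;
    # a near-but-not-exact match is a common phishing hint
    best = max(LEGIT_DOMAINS, key=lambda d: difflib.SequenceMatcher(None, domain, d).ratio())
    return best, difflib.SequenceMatcher(None, domain, best).ratio()

for d in ["faceb00k.com", "netfl1x-support.com", "example.org"]:
    print(d, lookalike_score(d))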