Case Study
Brief Explanation
In one of the most significant data breaches involving cloud infrastructure, Capital One was targeted by a cyberattack in March 2019 that exposed the sensitive personal information of more than 100 million Americans and roughly 6 million Canadians. The breach was carried out by Paige Thompson, a former Amazon Web Services (AWS) employee, who exploited a misconfigured web application firewall (WAF) in Capital One's cloud environment hosted on AWS.
The breach was not discovered internally; Capital One was tipped off by an outside GitHub user who found Thompson's postings about the stolen data on public code repositories and in Slack discussions. Investigations revealed that Thompson had used a technique known as Server-Side Request Forgery (SSRF) to query internal AWS resources, including the instance metadata service, and extract temporary credentials for the S3 buckets where Capital One stored its customer data.
This event raised serious concerns about cloud configuration security, insider threat potential, and
forensic readiness in cloud-native infrastructures.
The forensic investigation into the Capital One breach was complex because it involved cloud-based
assets, not physical devices. Traditional forensic models — centered on hard disk imaging and
memory analysis — were not applicable here. Instead, the response teams relied on cloud-native
tools and logs.
CloudTrail Log Analysis: AWS CloudTrail logs helped identify when and how API calls were made to list and access the S3 buckets (a query sketch follows this list).
IAM Role Examination: The attacker used temporary IAM credentials retrieved through SSRF.
Investigators verified misuse of roles and permissions.
IP Tracing and Attribution: Thompson’s IP address was linked to GitHub and a personal
domain. Forensic teams connected the online behavior to her known aliases.
Open-Source Intelligence (OSINT): Posts on GitHub, Slack, and Twitter showed that
Thompson had boasted about the breach before being caught.
Digital Evidence Preservation: Chain of custody procedures were maintained using AWS
forensic snapshots and log exports for legal proceedings.
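To make the CloudTrail step concrete, the sketch below (Python with boto3, assumed to have read access to CloudTrail) pulls recent management events for a given API call name; the region, event name, and time window are illustrative, not details from the actual investigation.

```python
# Sketch: querying CloudTrail for S3-related management events with boto3.
# The region, event name, and time window are illustrative placeholders.
# Object-level calls such as GetObject are data events and only appear if
# data-event logging is enabled on the trail.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ListBuckets"}],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        # Each record carries the caller identity and timestamp, which is the
        # kind of information investigators correlated with the SSRF activity.
        print(event["EventTime"], event.get("Username"), event["EventName"])
```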
The investigation also exposed critical gaps in Capital One's security posture:
Lack of Forensic Planning: The organization did not have a forensic readiness plan specific to cloud environments.
No Detection Mechanism: There was no alert system in place for anomalous API calls or
exfiltration attempts.
This breach made it clear that cloud forensics is not just an adaptation of traditional forensics but
requires its own set of tools, practices, and policies — especially in SaaS and IaaS models where
control is limited.
The case falls under U.S. Federal Cybercrime Laws, particularly the Computer Fraud and Abuse Act
(CFAA), which criminalizes unauthorized access to protected computers. Thompson was charged
with wire fraud and computer fraud.
The case also highlighted core challenges of cloud forensics:
Dependence on Provider Tooling: Investigators rely on provider capabilities such as immutable logs and snapshot tools rather than on seized hardware (a snapshot sketch follows this list).
Lack of Physical Access: There’s no direct imaging of servers or storage. Data is acquired
through provider APIs and event logs.
Evidence Volatility and Ephemerality: Temporary credentials, session tokens, and virtual
instances often leave limited evidence once terminated.
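Because there is no physical disk to seize, preservation in practice means snapshotting virtual disks through the provider's API, as the snapshot and evidence-preservation points above describe. The sketch below is a minimal illustration in Python with boto3; the volume ID, case number, and tags are hypothetical.

```python
# Sketch: preserving an EBS volume as a forensic snapshot via the AWS API.
# The volume ID, description, and tag values are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume of the affected instance
    Description="Forensic snapshot - case IR-2019-001 (placeholder identifier)",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [
                {"Key": "CaseNumber", "Value": "IR-2019-001"},
                {"Key": "Handler", "Value": "forensics-team"},
            ],
        }
    ],
)

# Recording the snapshot ID and start time supports the chain of custody.
print(snapshot["SnapshotId"], snapshot["StartTime"])
```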
Furthermore, the case emphasized the concept of "forensic readiness", where organizations
proactively prepare systems to retain evidence in the event of an attack — something Capital One
lacked.
Lessons Learned
The Capital One breach became a teaching moment for cybersecurity and digital forensics
professionals, especially as cloud adoption continues to rise. Several critical lessons emerged:
1. Misconfigurations Can Be Catastrophic
Many major breaches stem not from unknown vulnerabilities but from human error in configuration. Here, a single WAF rule misstep opened access to millions of records.
2. Cloud Forensics Requires Specialized Skills
Organizations need forensic experts trained in cloud platforms (AWS, Azure, GCP) who can work with provider APIs, identity and access logs, and other cloud-native evidence sources.
3. SSRF Is a Potent Attack Vector
Server-Side Request Forgery attacks trick a server into fetching internal resources that are not normally exposed. In this case, SSRF was used to query the EC2 instance metadata service, which returned temporary credentials that in turn unlocked the S3 data.
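A commonly cited mitigation for this exact vector is AWS's IMDSv2, introduced after the breach, which requires a session token that a simple SSRF-relayed GET cannot supply. The boto3 sketch below shows what enforcing it looks like; the instance ID is a placeholder.

```python
# Sketch: enforcing IMDSv2 (token-required instance metadata) on an EC2 instance.
# A plain SSRF-forwarded GET to the metadata endpoint cannot supply the required
# session token, which blunts the credential-theft technique used in this case.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    HttpTokens="required",              # reject token-less (IMDSv1-style) requests
    HttpPutResponseHopLimit=1,          # keep metadata responses from crossing proxies
    HttpEndpoint="enabled",
)
```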
4. OSINT Is Central to Attribution
The attacker’s own digital footprint — Slack chats, GitHub commits, and even tweets — provided
crucial evidence. This highlights the role of OSINT in digital forensics investigations, especially for
attribution and behavioral profiling.
5. Incident Response Must Be Cloud-Native
Traditional IR playbooks often fail in dynamic cloud environments. Logging, alerting, and
snapshotting should be automated and scalable.
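As one hedged illustration of what automated, scalable alerting can mean here (and of the anomalous-API-call detection the bank lacked), the sketch below attaches a metric filter and alarm to a CloudTrail log group; the log group name, filter pattern, namespace, and threshold are assumptions for illustration only.

```python
# Sketch: alerting on bursts of S3 ListBuckets calls recorded in a CloudTrail
# log group. Log group name, pattern, namespace, and threshold are placeholders.
import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",   # hypothetical log group
    filterName="s3-list-buckets-calls",
    filterPattern='{ $.eventName = "ListBuckets" }',
    metricTransformations=[
        {
            "metricName": "S3ListBucketsCalls",
            "metricNamespace": "Security/CloudTrail",
            "metricValue": "1",
        }
    ],
)

cloudwatch.put_metric_alarm(
    AlarmName="AnomalousS3Enumeration",
    Namespace="Security/CloudTrail",
    MetricName="S3ListBucketsCalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,                      # arbitrary example threshold
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```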
6. Cloud Security Is a Shared Responsibility
While AWS wasn’t directly at fault, this case reminded the world that cloud security is a shared
responsibility. Configuration and access control lie with the customer (Capital One), not the cloud
provider.
Conclusion
The Capital One data breach serves as a watershed moment in the digital forensics and cloud security
landscape. It demonstrated that simple misconfigurations, when combined with publicly known
exploitation techniques like SSRF, can lead to massive data exfiltration events.
It also shifted the focus of forensic science from physical devices to virtual environments, API logs,
and cloud-native artifacts. The breach showed the importance of having proper access control,
forensic logging, and a response plan tailored for the cloud.
From a legal and forensic standpoint, the case established new norms for how cloud-based intrusions are investigated, attributed, and prosecuted.
Case Study
Brief Explanation
On May 12, 2017, a global ransomware outbreak named WannaCry began infecting computers in
over 150 countries. It primarily targeted Windows machines by exploiting a known vulnerability in
the SMBv1 protocol (CVE-2017-0144), using an exploit known as EternalBlue, which had been leaked
from the NSA by a hacking group called the Shadow Brokers.
The ransomware encrypted users’ files and demanded payments in Bitcoin to decrypt them. Victims
included hospitals (notably the UK's NHS), telecom companies, government agencies, and businesses
like FedEx and Renault. It caused an estimated $4–6 billion in global damages.
The attack was automated, spreading rapidly through internal networks without human interaction.
What made WannaCry particularly devastating was that many systems had not applied the
Microsoft patch (MS17-010) released two months earlier, despite warnings.
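As a small illustration of the hygiene check involved, the sketch below reads the documented LanmanServer registry value that controls the SMBv1 server on Windows; treat it as a starting point rather than a complete audit, and note that the handling of a missing value is an assumption about older defaults.

```python
# Sketch: checking whether the SMBv1 server component is enabled on a Windows
# host by reading the documented LanmanServer "SMB1" registry value.
# Windows-only; a missing value means the OS default applies.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

def smb1_enabled() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "SMB1")
            return value != 0
    except FileNotFoundError:
        # Value not present: the OS default applies, which older Windows
        # versions (the ones hit hardest by WannaCry) shipped as enabled.
        return True

if __name__ == "__main__":
    print("SMBv1 server enabled:", smb1_enabled())
```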
Organizations impacted by WannaCry had to rapidly shift into full-scale Incident Response (IR) mode,
with coordination between IT, cybersecurity teams, legal, and external response vendors. For many,
this incident was a wake-up call that IR capabilities were underdeveloped.
Typical response actions included:
Forensic Imaging: Created forensic copies of compromised systems for analysis without tampering with evidence.
Emergency Patching: Urgently pushed MS17-010 and disabled SMBv1 across endpoints and servers.
Backup Restoration: Verified that clean backups existed and restored systems without paying the ransom.
Delayed Patching: Affected organizations had not applied critical Windows security patches despite public alerts.
Standards such as ISO/IEC 27035 describe incident response as a phased lifecycle, commonly summarized in six steps:
1. Preparation: Build the IR team, tooling, and playbooks before an incident occurs.
2. Identification: Detect and confirm that an incident is in progress.
3. Containment: Isolate affected systems to limit the spread.
4. Eradication: Remove the malware and close the exploited vulnerability.
5. Recovery: Restore systems and data to normal operation.
6. Lessons Learned: Analyze the root cause and improve the IR process.
In the WannaCry case, these phases were often followed retrospectively. For example, many
organizations developed IR teams only after being attacked, emphasizing the importance of the
Preparation phase.
The "kill switch" domain, discovered by researcher Marcus Hutchins, prevented further infections
but did not help systems already encrypted. His analysis and reverse engineering of the malware
became a model example of technical incident response.
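The kill-switch logic uncovered by that reverse engineering is simple enough to sketch: before encrypting, the worm tried to reach a hardcoded, unregistered domain and exited if the request succeeded, which is why registering the domain acted as a sinkhole. The Python below mimics only that decision; the URL is a placeholder, not the real hardcoded domain.

```python
# Sketch of the kill-switch control flow found during reverse engineering:
# the malware exited if a request to a hardcoded domain succeeded, so
# registering ("sinkholing") that domain halted new infections.
import urllib.request

# Placeholder only; the real sample contained a long hardcoded domain name.
KILL_SWITCH_URL = "http://killswitch.example.invalid/"

def kill_switch_reachable(url: str = KILL_SWITCH_URL) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if kill_switch_reachable():
        print("Domain resolves and responds: the worm would exit here.")
    else:
        print("Domain unreachable: the worm would continue its routine.")
```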
Lessons Learned
1. Patch Management Is Non-Negotiable
Despite Microsoft releasing the MS17-010 patch in March 2017, many organizations remained
vulnerable. WannaCry proved that unpatched systems are the low-hanging fruit for attackers.
2. Preparation Determines Response Speed
Organizations that had incident response plans, playbooks, and training in place were able to react
more efficiently. Those without plans lost critical time to confusion.
3. Backups Are the Last Line of Defense
Reliable, offline, and regularly tested backups were the only way to recover without paying ransom.
Backups must be segregated from production environments to prevent encryption.
4. Detection and Logging Reduce Damage
Organizations with robust logging and SIEM solutions detected the breach faster and could isolate
impacted nodes before full infection. Real-time alerts for unusual SMB traffic or unauthorized
process spawning could have minimized damage.
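To make that detection idea concrete, here is a deliberately simple sketch that flags hosts contacting an unusually large number of distinct peers on TCP port 445, the fan-out pattern a worm like WannaCry produces; the flow-record CSV columns and the threshold are assumptions.

```python
# Sketch: flagging worm-like SMB fan-out from exported flow records.
# Assumes a CSV with "src", "dst", and "dst_port" columns; the threshold
# is an arbitrary example and would need tuning against a real baseline.
import csv
from collections import defaultdict

SMB_PORT = "445"
FANOUT_THRESHOLD = 50  # distinct SMB peers per source before alerting

def find_smb_fanout(path: str) -> dict[str, int]:
    peers = defaultdict(set)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dst_port"] == SMB_PORT:
                peers[row["src"]].add(row["dst"])
    return {src: len(dsts) for src, dsts in peers.items() if len(dsts) >= FANOUT_THRESHOLD}

if __name__ == "__main__":
    for src, count in find_smb_fanout("flows.csv").items():
        print(f"ALERT: {src} contacted {count} distinct hosts on port 445")
```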
5. Network Segmentation Limits Spread
Flat network architectures helped the worm spread to thousands of endpoints. Simple segmentation
could have slowed or stopped propagation.
6. Threat Intelligence Sharing Works
Information sharing between governments, CERTs, and private security vendors accelerated analysis of the malware and containment of the outbreak.
7. IR is a Multi-Disciplinary Process
It’s not just a technical effort — communication with stakeholders, legal departments, customers,
and the public is just as important.
Conclusion
The WannaCry ransomware attack remains one of the most widespread and influential cybersecurity
incidents in history. It demonstrated the global consequences of poor cybersecurity hygiene and
unprepared incident response processes.
For many organizations, this attack was the trigger for creating or maturing their Incident Response
capabilities. It also drove home the necessity of proactive defense — including patching,
segmentation, and threat intelligence.
From a digital forensics standpoint, WannaCry required rapid evidence acquisition, malware analysis,
and cross-organizational collaboration. Today, it is used as a case study in every major cybersecurity
and incident response curriculum, reinforcing a core message:
WannaCry forced the world to think beyond just defense — and into response, recovery, and
resilience.