Enterprise Security Fundamentals

The cybersecurity landscape is complex, with attackers developing new methods daily to compromise systems. Intrusion tools originally created by governments have been leaked online, making them widely available. New vulnerabilities are constantly being discovered and exploited, while vendors work to release updates. However, many organizations do not install updates promptly, leaving vulnerabilities that can be exploited. Further, attackers now find it easier to monetize their activities through ransomware or coin mining malware deployed on compromised systems. This increasingly profitable landscape for attackers is likely to lead to more aggressive cyberattacks.


The current cybersecurity landscape is complex. Attackers develop new and ingenious methods
of compromising systems on a daily basis. Intrusion tools, originally developed by the
intelligence agencies of nation states, have been leaked, reverse engineered, and then made
available to anyone clever enough to know where to look for them. New credential breaches
are published on breach notification services, such as haveibeenpwned.com, every few days.
Exploit frameworks are updated to leverage newly discovered vulnerabilities.

Every month a new set of vulnerabilities is patched by vendors. Security researchers continue
to find vulnerabilities in applications, products, and operating systems. Often vendors are able
to release updates before knowledge of those vulnerabilities makes it to the public. While
vendors are usually diligent in releasing updates to address vulnerabilities, information security
personnel don’t always get around to installing those updates in a timely manner.

In the current cybersecurity landscape, attackers are finding it simpler to monetize their
activities, either by deploying ransomware that encrypts a target’s data and system and
demanding payment for a solution, or by deploying coin mining software that generates
cryptocurrency using the resources of the target organization’s infrastructure. Making a profit
by compromising a target’s infrastructure is becoming easier. This is likely to lead to a more,
rather than less, aggressive cybersecurity landscape.

The current cybersecurity landscape is vast and likely impossible for any one individual to
comprehend in its entirety. There are, however, several aspects of that landscape to which those
interested in the fundamentals of enterprise security should pay attention. These include, but
are not limited to:

• Technology lag
• Application development security
• Skill gap
• Asymmetry of attack and defense
• Increasing availability and sophistication of attack tools
• Monetization of malware
• Automation of detection
• Internet of Things
• Transition to the cloud
• Increasing regulation

Technology lag
When considering the cybersecurity landscape, it’s important to note that the versions of
products that organizations have deployed exist on a spectrum, with a small number of
organizations running the latest versions, most organizations running older but still supported
versions, and a substantial number of organizations running information systems that are no
longer supported by the vendor.

While the latest operating systems and applications still have vulnerabilities, organizations can
substantially improve their security posture by ensuring that they are running the most recent
versions of operating systems and applications and by keeping those products current with
released updates. It’s also important to note that many vendors are less diligent about
addressing security vulnerabilities that are discovered in older versions of their products. A
vulnerability that may be addressed in the current edition of a product may not be addressed in
previous versions of the product.

It’s usually the organizations running outdated or unsupported products that you hear about
when a large cybersecurity incident occurs. For example, the 2017 WannaCry ransomware
attack disproportionately impacted organizations that had servers running the Windows Server
2003 operating system with the ports used by the SMB protocol (TCP port 445) exposed
to the internet.

The WannaCry incident is reflective of a substantive part of the cybersecurity landscape in that
it demonstrated that not only are a large number of organizations running outdated or
unsupported information systems, but that the security configuration of the networks that host
those systems fell far below best practice.
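The kind of exposure that enabled WannaCry can be checked for with a simple port probe. The Python sketch below tests whether a host accepts connections on TCP port 445, the SMB port; the host value is a placeholder, and a real assessment would probe from outside the perimeter rather than from a trusted network.

```python
import socket

def smb_port_exposed(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        # create_connection raises OSError if the port is closed or filtered.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative usage; "fileserver.example.internal" is a placeholder name.
# if smb_port_exposed("fileserver.example.internal"):
#     print("SMB port reachable: verify it is not exposed to the internet")
```

A result of True only shows the port is reachable from where the probe ran; whether that constitutes internet exposure depends on where the scan originates.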

Application development security

The adoption of secure application development practices is another important part of the
cybersecurity landscape. Many application developers create applications that are subject to
attacks including cross-site scripting (XSS) and SQL injection, even though these attack
vectors have been known about and understood for many years. As applications move from
being locally installed on computers and devices to running as web applications in the cloud,
it is important for organizations to ensure that secure application development practices are
followed.
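To make the SQL injection risk concrete, the following Python sketch, using an in-memory SQLite database with illustrative table and column names, contrasts string concatenation with a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is spliced directly into the SQL text,
    # so input containing quotes can change the query's meaning.
    query = "SELECT id FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: the driver binds the value; it is never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

malicious = "nobody' OR '1'='1"
# The unsafe query matches every row; the safe query matches none.
```

The same principle applies to any database driver: values supplied by users should reach the database as bound parameters, never as fragments of the query string.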

Skill gap

It’s regularly reported that the field of information security doesn’t have enough trained
personnel to meet industry needs. The recent Global Information Security Workforce Study
by the Center for Cyber Safety and Education projected a global shortfall of 1.8 million
information security workers by 2022. Organizations cannot begin to protect themselves from
the various threats that exist if they aren’t able to hire the personnel to manage and secure their
information systems.

As you will be reminded throughout this course, information security is an ongoing process.
It’s not enough to have a consultant come in to deploy and configure software and hardware
and then assume your organization’s information systems are secure going forward. Instead,
the process of securing information systems is ongoing. For most organizations this means
having IT staff that are trained in information security processes. Until the skill gap is closed,
the cybersecurity landscape will be littered with organizations that are unable to substantively
improve their security posture because they can’t hire the personnel that would enable them to
do so, and because existing personnel are overworked due to a shortage of filled headcount.

Availability and sophistication of attack tools

An adage within the cybersecurity industry is that tools that are only available to the elite
hacking teams of nation state intelligence agencies today will be available to teenage script
kiddies within five years. “Script kiddie” is a derisive term for an individual who uses
sophisticated scripts and applications developed by experts to attack information systems
while having no real understanding of the underlying functionality of those tools. Put
another way, a “script kiddie” is a “point and click” hacker.

Attack tools are increasingly sophisticated. These automated exploit tools are relatively
straightforward to procure and take little in the way of expertise to use. Whereas in the past
access to basic tools required gaining access to select communities on hidden bulletin boards
or Internet Relay Chat (IRC) channels, today it doesn’t take an enthusiastic amateur more
than a few minutes with the results of the right search engine queries to get started. Should
they need to learn more about the tools they have acquired, there are hundreds of hours of
video tutorials available on the web to assist them.

While sophisticated attack tools are often available for free, there is a paucity of similar tools
available for defenders. While the process of launching a basic or even moderately complex
attack against an organization’s information systems may be as simple as a mouse click, the
defender’s process of securing the configuration of those information systems is manual,
complex, lengthy, and ongoing, and requires a good deal of expertise.

Asymmetry of attack and defense

Within the cybersecurity landscape there is an asymmetry between attacker and defender:
the resources required for an organization to be reasonably assured that it is protected from
the vast majority of intrusions vastly exceed the resources required for a competent attacker
to perform a successful intrusion.

One key understanding of the cybersecurity landscape is that the vast majority of attackers
are unsophisticated and are using automated vulnerability scanners and exploit tools. Put
another way, most attackers by volume are likely “script kiddies” rather than professional
hackers. As the vulnerabilities those automated tools attempt to exploit are often already
addressed by vendor updates, if an organization is diligent and applies consistent effort to its
security posture, it will be able to protect its information systems against the common
attacker.

Put another way, if you take an ongoing and systematic approach to securing your
organization’s information systems, it’s reasonably unlikely that “script kiddies” will be able
to compromise your system. A diligent well-resourced defender is likely to be protected
against all but the most highly resourced and persistent attacker.

While there is an asymmetry in terms of the effort required to properly secure information
systems, it is possible to reach a stage where your organization’s systems security posture is
such that those systems are impervious to all but the most skilled and well-resourced
attackers. With time and effort, you can protect yourself against the amateurs, who randomly
attack organizations to see if they can get access. With greater time, effort, resources, and
skill you’ll be able to protect your organization’s information systems against more
competent attackers that deliberately target your organization.

The unfortunate reality is that even when organizations have highly skilled personnel, those
personnel are rarely given the necessary amount of time and resources to ensure that the
organization’s information systems are configured in the most secure manner possible. The
existing problem of asymmetry between attacker and defender is made worse by
organizations not giving their defenders the resources they need to do their job.

Monetization of malware

A big change in the recent cybersecurity landscape is coin mining software: software that
mines cryptocurrency, such as Monero, Bitcoin, or Ethereum. This is a big change because
in the past it was difficult for an attacker to monetize an intrusion. Coin mining software
makes monetizing intrusions straightforward. An attacker who successfully deploys coin
mining software on a target organization’s information system just has to sit back and wait
for the cryptocurrency to start rolling in.

In the past amateurs may have been motivated to learn how to attack information systems by
a variety of factors including curiosity. With the current mania around cryptocurrencies and
the promise that it may be possible to earn such currency by running freely available exploit
tools, it’s not unreasonable to assume that amateurs will be even more motivated to attack
information systems in the hope of generating income.

Automation of detection

One aspect of the cybersecurity landscape that has become brighter for defenders is that it has
become easier to detect attacks that would otherwise only have been apparent through expert
analysis of information systems’ event log telemetry. While some attackers are overt and do
little to hide their presence on the network, competent attackers often spend quite some time
performing reconnaissance once they have established a beachhead on the organization’s
network. These attackers leave only subtle traces of their presence that you might not be
alerted to unless you have sophisticated intrusion detection systems that can recognize signs
of the intruder’s activities. If an organization can detect attackers while the attackers are still
performing reconnaissance, they can reduce the amount of damage done.

In the past Security Information and Event Management (SIEM) systems would analyze
information and detect suspicious activities based on heuristics developed by the vendor.
While these systems are effective in discovering suspicious activity, they are only able to
detect suspicious activity if the vendor recognizes the characteristics of that suspicious
activity. To recognize new types of suspicious activity, the SIEM system must be updated
with new signatures that allow it to recognize the characteristics of that activity.
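A signature-style detection rule of the kind described above can be sketched in a few lines. The event format and threshold below are illustrative assumptions, not the format of any real SIEM product:

```python
from collections import Counter

def flag_brute_force(events, threshold=5):
    """Flag accounts with an unusually high number of failed logons.

    events: iterable of (account, outcome) tuples, where outcome is
    'success' or 'failure'. Real SIEM rules operate on far richer
    telemetry (source IP, time windows, event IDs); this captures only
    the core idea of matching a known pattern of suspicious activity.
    """
    failures = Counter(acct for acct, outcome in events if outcome == "failure")
    return sorted(acct for acct, n in failures.items() if n >= threshold)

# Six failed logons against one service account, one ordinary user typo.
events = [("svc-backup", "failure")] * 6 + [
    ("jsmith", "failure"),
    ("jsmith", "success"),
]
# flag_brute_force(events) flags only "svc-backup".
```

The limitation noted in the text is visible here: the rule detects only the pattern it was written for, which is why vendors must ship new signatures as new attack techniques appear.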

Cloud-based services, such as Azure Security Center, Azure Advanced Threat Protection, and
Windows Defender Advanced Threat Protection, provide organizations with more effective
threat detection functionality than traditional methods, such as manual telemetry analysis.
These cloud-based services have access to Microsoft’s Security Graph. Microsoft’s Security
Graph centralizes the security information and telemetry that Microsoft collects across all its
sources. This includes telemetry related to attacker activity across all of Microsoft’s
customers, as well as information from Microsoft’s own ongoing security research efforts.

Through machine learning analysis of this vast trove of data, Microsoft can recognize the
subtle characteristics of attacker activities. Once the characteristics of a specific attack are
recognized through analysis of this immense data set, similar activity will be detected should
it occur on customer networks.

The cybersecurity landscape has also changed now that defenders increasingly have access to
tools like Azure Security Center that can highlight and, in some cases, remediate security
configuration problems on monitored information systems. In the past information security
professionals would have to work through configuration checklists when hardening servers,
clients, and other equipment. Today services such as Azure Security Center can provide
recommendations as to what configuration changes should be made to on-premises and cloud
hosted workloads to make them more secure. Security configuration recommendations
provided by these services can also be updated as new threats emerge. This helps ensure that
an organization’s security posture remains up-to-date.

Defenders also have access to breach and attack simulation tools. Rather than relying on
experienced penetration testers to perform red team exercises to locate known vulnerabilities
in an organization’s information systems configuration, breach and attack simulation tools
simulate an attack and locate known vulnerabilities. While such tools won’t find every
possible vulnerability, they are likely to detect the vulnerabilities most often exploited by
attackers. If defenders remediate all vulnerabilities found by such tools, their engagement
with penetration testers performing red team exercises is likely to be more valuable. Using
such tools before engaging a red team will certainly reduce the likelihood of expensive
penetration testers discovering a list of obvious configuration vulnerabilities that should have
been found by even the most cursory of examinations. When an organization engages
penetration testers, the hope is that they’ll discover something that the organization’s
information security staff couldn’t have seen, not something that they knew about but didn’t
get around to addressing.

Internet of Things

Another big change in the cybersecurity landscape over the past decade has been the rise of
the Internet of Things (IoT). The IoT is the network of physical objects, devices, televisions,
refrigerators, home climate systems, cars, and other items, that are increasingly embedded
with electronics, software, sensors, and network connectivity that enable these objects to
collect and exchange data. While consumer operating systems, such as Windows 10, OS X,
iOS, and Android, have increased security features with every release and update, the
operating systems of Internet of Things devices rarely receive long term security update
support from their vendors.

The IoT presents an ongoing challenge on the cybersecurity landscape in that these devices
are likely to remain insecure. This is because even when vendors do provide updates, unless
those updates are installed automatically, few owners of these devices will bother to apply
those updates. While people will apply software updates to their computers and phones when
reminded, most are less diligent when it comes to applying software updates to their
refrigerator, washing machine, or television.

How does this impact the cybersecurity landscape? Botnets comprised of IoT devices have
already been used to perform distributed denial of service attacks. While the processing
capability of IoT devices is much less significant than that of desktop computers or servers,
it’s likely only a matter of time before an enterprising attacker works out how to get rich
using a botnet of refrigerators to mine cryptocurrency.

Transition to the cloud

The cybersecurity landscape has been substantially altered by organizations moving on-
premises workloads to the cloud. It is important to note, though, that moving infrastructure,
applications, and data to the cloud doesn’t mean that the responsibility for information
security shifts from organizational personnel to the cloud provider.

As has been amply demonstrated by developers leaving cloud storage containers globally
accessible, the security of a deployment in the cloud is only as good as the cloud tenant
configures it to be. Just as with on-premises information system security, the settings to
secure workloads are present, but they must actually be configured by the information
technology professionals responsible for those workloads.

For example, a cloud storage container used by a major US newspaper to host website code
allowed read access to anyone in the world. Attackers used this access to inject coin mining
code into the web pages delivered by the newspaper to its readers. Each time a reader visited
the newspaper website, some cycles of their computer’s CPU worked on generating
cryptocurrency for the attackers who had modified the contents of the cloud storage
container.
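The misconfiguration in this example comes down to a container access level. A minimal audit sketch follows, using hypothetical configuration records rather than a real cloud provider API; an actual check would enumerate containers through the provider's management interface, but the principle is the same: flag anything that permits anonymous reads.

```python
def publicly_readable(containers):
    """Return names of containers whose access level allows anonymous reads.

    containers: list of dicts with 'name' and 'public_access' keys.
    The field names and access levels here are illustrative; real
    providers use their own terminology for anonymous-access settings.
    """
    return [c["name"] for c in containers if c["public_access"] != "none"]

# Hypothetical inventory: one world-readable container, one locked down.
containers = [
    {"name": "website-code", "public_access": "container"},
    {"name": "payroll-backups", "public_access": "none"},
]
```

Running such an audit on a schedule, rather than once, reflects the point made throughout this course that securing information systems is an ongoing process.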

Increasing regulation

A final aspect of the cybersecurity landscape that is worthy of attention isn’t strictly
technology related, but instead relates to regulation and legislation. For many years the
information technology industry was left to its own devices when it came to how much
energy it put into protecting information systems infrastructure. Unfortunately, the
industry hasn’t been successful enough in containing breaches. The public, and
eventually politicians, have noticed that breaches continue to occur even as all of us move
more of our lives and sensitive information online.

This has led an increasing number of jurisdictions to introduce legislation and regulation
mandating the security controls that should be present over certain types of data hosted in
organizational information systems. The cybersecurity landscape has changed in that IT
security staff today need to be conversant not only with the security controls available for the
technologies they are responsible for managing, but also with the rules and regulations that
apply to the organization’s information systems, and with the responsibilities that must be
upheld in the event that an intruder successfully breaches the organization’s systems.

Overview

In the best of all worlds our organization’s information systems are in a pristine state when
we start implementing security controls. In this model, intrusions exist as a future possibility
rather than as something that may have happened before you started thinking about how to
secure your organization’s information systems.

The assume compromise philosophy takes the position that an organization should build and
maintain its security posture based on the idea that the organization’s information systems
have already been compromised. Another part of the assume compromise philosophy is that
the organization should assume that preventative technologies such as firewalls, anti-virus,
and intrusion detection systems (IDS) will fail. Under the assume compromise philosophy,
information security teams focus on detecting and responding to suspicious activity rather
than simply preventing intrusion. Detection of suspicious activity can be assisted by
leveraging cloud-based analytics services that constantly monitor information systems
telemetry for anomalies.

When you design a security posture with assume compromise in mind, you restrict an
attacker’s ability to move laterally between information systems and to escalate privileges
within those systems. These goals can be achieved by implementing technologies such as
Just Enough Administration (JEA) and Just in Time (JIT) administration, segmenting
networks, deploying code integrity policies, and enforcing good administrative practices such
as restricting administrative sessions so that they can only be initiated from specially
configured privileged access workstations.
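As an illustration of the JIT idea, here is a toy Python sketch in which administrative rights expire automatically rather than being held permanently. The class and its in-memory store are hypothetical; real JIT systems add approval workflows, auditing, and integration with the directory service.

```python
import time

class JitGrants:
    """Toy Just in Time (JIT) administration: elevation is time-bounded."""

    def __init__(self):
        self._grants = {}  # user -> expiry timestamp (seconds since epoch)

    def grant(self, user, duration_seconds):
        # Elevation is always granted with an expiry, never permanently.
        self._grants[user] = time.time() + duration_seconds

    def is_admin(self, user):
        # A user is an administrator only inside an unexpired grant window.
        expiry = self._grants.get(user)
        return expiry is not None and time.time() < expiry

grants = JitGrants()
grants.grant("alice", duration_seconds=3600)  # one-hour elevation
```

The security benefit is that a stolen credential is only useful to an attacker while a grant is active, shrinking the window for lateral movement and privilege escalation.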

Compromise examples

Few attackers compromise an organization without having an objective beyond proving that
the organization can be compromised. Attackers target organizations because they wish to
accomplish one or more goals. When an organization is compromised, the attackers often do
one of the following:

• Exfiltrate data
• Deploy ransomware
• Enroll systems in a botnet
• Deploy coin mining software

Data exfiltration

The attackers extract sensitive data from the organization. This data may be stolen for a
variety of reasons, from the theft of commercially sensitive information to the exposure of
organizational secrets to damage the organization’s reputation. Some of the most famous
attacks have involved data exfiltration, such as gaining access to a substantial number of
customer credit card numbers.

Ransomware

In ransomware attacks, the attackers encrypt the organization’s data and render the
organization’s information systems non-functional. The attackers do this in the hope that the
organization will pay a ransom, usually in the form of a cryptocurrency. Once the target
organization pays the ransom, the attackers will provide the organization with an unlock key.
After inputting this key, the data will be decrypted and the information systems previously
rendered non-functional will be returned to full functionality.

Botnets

Botnets are collections of computers that can be configured to perform a specific task, such as
performing a distributed denial of service attack. Botnets can be monetized in several ways,
including extorting money through the performance of distributed denial of service attacks or
relaying spam (unsolicited commercial email).

Coin mining attacks

As of early 2018, coin mining attacks are becoming increasingly prevalent due to their
lucrative nature. Coin mining malware deployed in attacks is sophisticated enough to use
only some, not all, of the host system’s resources, meaning it isn’t always obvious when a
system is infected. Coin mining attacks have also been perpetrated by insiders who use their
organization’s infrastructure to generate illicit income.
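Because throttled miners deliberately avoid tripping obvious 100% CPU alerts, one simple heuristic is to look for utilisation that sits in a sustained elevated band instead. The sample values and thresholds below are illustrative, not tuned guidance:

```python
def sustained_cpu(samples, low=40.0, high=90.0, min_fraction=0.9):
    """Flag hosts whose CPU utilisation sits in an elevated band.

    samples: CPU utilisation percentages collected over time.
    Returns True when at least min_fraction of samples fall in the
    [low, high] band, which suggests a throttled, always-on workload.
    """
    if not samples:
        return False
    in_band = sum(1 for s in samples if low <= s <= high)
    return in_band / len(samples) >= min_fraction

# A host idling with occasional spikes looks normal; a host sitting at
# roughly 60% around the clock deserves a closer look.
idle_host = [2, 3, 95, 4, 2, 3, 5, 2, 1, 3]
suspect_host = [61, 58, 63, 60, 59, 62, 64, 60, 61, 57]
```

A heuristic like this produces false positives (legitimate batch workloads also run hot for long periods), so in practice it is one signal among many rather than proof of infection.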

Lesson Review
Thoughtful pause

Share your thoughts in the course forum on the following topics.

• What systems does your organization have in place to detect suspicious activity on the
network?
• Are there other examples of compromise that you can think of?

Overview

The cost of a breach is always an estimate. Even after a breach occurs, the actual cost of the
breach may never be accurately determined. On top of the disruption to business processes,
it is difficult to assess the value of intangibles such as reputational damage, the cost of
rehabilitating compromised systems, the cost of investigating the breach itself, and the cost
of any fines or penalties that may need to be paid to the relevant authority.

Some of the factors that contribute to the cost of a breach include, but are not limited to:

• Breach investigation
• Systems rehabilitation
• Reputational damage
• Destruction of assets
• Compliance costs

Breach investigation

After the attacker has been successfully ejected from the organization’s information systems,
an organization should perform a thorough investigation to determine as much as it can about
the particulars of the breach. Performing this investigation will cost the organization as it
takes personnel away from their day to day work tasks. It may also be necessary to bring in
outside expertise so that the full extent of the compromise can be ascertained, which will
again cost money and time.

The benefit of these costs will be that the organization has a clear picture of how the breach
occurred, how long the intruder was present within the organization’s information systems,
and the steps that can be taken to ensure that attackers will not be successful leveraging
similar techniques in the future.

Systems rehabilitation

Once the attacker has been successfully ejected from the organization’s information systems,
it’s then necessary to ensure that those systems are rehabilitated. Not only is it necessary to
remediate the vulnerabilities that allowed the attacker to compromise the system, it is also
necessary to ensure that any modifications that the attacker may have made to the system are
located and removed. Rehabilitating a system isn’t just a matter of reverting to the last
backup as it may be that the attacker compromised the system some time ago. Reverting to
the last backup won’t remove the tools that the attacker placed on the system to retain
persistence if those tools have been included in the system backups for some time. In many
cases the only way to ensure that a system is rehabilitated is to deploy it again from the
beginning and then address the vulnerabilities that allowed the attacker to gain access.

Reputational damage

Sometimes the biggest cost of a successful breach is to reputation. Reputational damage
doesn’t just occur when sensitive internal documents are leaked to the media. For example,
consider an ecommerce site that suffers a breach where customer payment information is
compromised. Customers of the site may be wary of using the site again in the future,
especially if they’ve had to cancel an existing credit card as a result of the breach. When
customers lose faith in an organization’s ability to protect their information, they are less
likely to interact with that organization.

Destruction of assets

Some attackers plant malware that is designed to destroy the systems of the target
organization. Some malware works by reconfiguring hardware to operate beyond its safe
specification, for example by overclocking a processor until it overheats and fails. Other
malware erases data on target systems or renders them inoperable. In some cases, the
malware is deployed deliberately, destroying sensitive systems either to inflict financial
damage or as a way of forcing the target organization’s information systems to become
inoperative.

Compliance costs

Another change in the cybersecurity landscape in recent years has been how regulation has
encroached on the industry. Depending on the type of breach that occurs and the type of
industry the target organization is in, there may be fines that must be paid to specific
authorities, as well as investigations and reports that must be generated, all of which cost
money and other organizational resources. In some cases, an organization that suffers a
breach may be subject to ongoing reporting requirements for a period of several years. In
some jurisdictions this can include paying for periodic external audits to ensure that the
organization has correctly implemented the necessary security controls to minimize the
chance of a similar breach occurring in future.
Overview

Red team versus blue team exercises involve the simulation of an attack against an
organization’s information systems. The red team simulates, and in some cases performs as
a proof of concept, the steps taken in an attack against the organization’s IT systems. The
blue team simulates the response to that attack.

This adversarial approach not only allows for the identification of security vulnerabilities in
the way that the organization’s IT systems are configured, but also allows members of the
organization’s information systems staff to learn how to detect and respond to attacks.

Red Team

At a high level the red team plays the role of an attacker against the organization. Red teams
can consist of people inside the organization, an external penetration testing team, or a mix of
both.

A red team exercise often involves a proof of concept demonstration that the vulnerabilities
that the team has found are practically, rather than theoretically, exploitable. For example,
proving domain dominance by creating accounts that are members of the Domain Admins
group in an Active Directory environment, or showing control of an individual machine by
creating an account with local administrative privileges on a standalone system. When
defining the objectives of the exercise, it is important to clearly define what counts as a
red team victory, rather than allowing ambiguity about whether vulnerabilities exist when
performing the post-mortem exercise.

Blue Team

The blue team plays the role of your existing information security and IT administration staff.
The aim of red team versus blue team exercises is both to determine if vulnerabilities are
present in the existing security configuration as well as to train organizational staff how to
detect and respond to attacks against organizational IT infrastructure. You’ll learn more about
the role of a blue team and how to construct an effective blue team in the next module.

As a part of your ongoing security preparations, you should rotate members of staff between
red and blue teams when conducting subsequent exercises. This allows your staff to learn,
develop, and appreciate both the attacker and defender mindsets.

Exercise structure

Initial exercises should be whiteboarded as a role-playing exercise. This allows both the red
and blue teams to develop a good understanding of the parameters of the exercise. Without a
strict understanding of those parameters, red team versus blue team exercises can quickly
spiral out of scope.

Later exercises should move beyond whiteboarding and role playing to practical proof of
concept. In these later phases the red team’s activities should never place the information
systems of the target organization at risk. A red team shouldn’t need to deploy coin mining
malware on a domain controller to demonstrate that the domain controller is vulnerable to
attack. There are other, less deleterious ways of making this point, such as installing a
harmless application on the server. Installing a harmless application demonstrates the ability
of the attacker to install software on a sensitive server, which is all that the red team needs to
accomplish, without going to the point of having every domain controller running coin
mining software.

The overarching aim of the red team is to provide a proof of concept that the target
organization is vulnerable to a specific type of attack. The overarching aim of the blue team
is to be able to detect and respond to that attack in an effective manner.

Management Approval

It is critical that management be kept informed of red team versus blue team exercises,
especially when those exercises move beyond role playing and whiteboarding to actions
that directly affect infrastructure. For example, it is possible that infrastructure
functionality might be disrupted by the exercise. Management should approve the exercise
goals and be made aware of what achieving those goals means in terms of modifications to
existing information systems. Management should also be involved when engaging red teams
that are external to the organization in penetration testing exercises.

Internal versus external red teams

When initially working on your organization’s security configuration and incident response
strategies, you might choose to start with internal red and blue teams. You could continue to
run exercises pitting red team against blue team until a point is reached where the security
configuration of the organization is at a level where it is beneficial to subject it to a
professional penetration testing attempt. It’s likely that until several red team versus blue
team exercises have been run, there will be obvious and potentially embarrassing holes in the
existing security configuration.

A disadvantage of having the red team drawn exclusively from within the organization is
that its members will bring some of the organization's assumptions with them. Outsiders
bring their own assumptions, and systems that an internal red team member might believe
to be so secure as to be unassailable may have faults that are obvious and exploitable
to someone approaching those systems from outside the organizational mindset.

As reputable professional penetration testers will have extensive experience and knowledge,
they are likely to find vulnerabilities that might not be apparent to security engineers who
haven’t explicitly specialized in organizational penetration.

As red team versus blue team exercises should be ongoing, many organizations use internal
teams for most exercises, punctuating with occasional exercises where the red team is made
up of professional penetration testers who specialize in this type of exercise. This allows for
the organization’s security configuration to be periodically exposed to people that aren’t
inculcated in the organizational security culture.

Overview

When developing a red team versus blue team exercise it is important to specify the red
team's objective. The objective is the overall aim of the exercise, and red teams may have
more than one objective in an exercise. When organizations are starting out with red
team/blue team exercises, they should limit objectives so that the exercise does not become
overly complicated. Once red team/blue team exercises become more established, the
outcomes of exercises with more complex and difficult objectives will become clearer. If both
the red team and blue team are inexperienced and the red team is pursuing a set of complex
objectives, it will be difficult to conclude whether the organizational infrastructure is indeed
secure or whether the red team simply wasn't organized or capable enough to exploit
vulnerabilities.

Attackers, and red teams, can have more than one objective in an exercise. When engaging
red teams as penetration testers from outside the organization, ensure that the objectives are
clearly stated before the exercise begins. A red team/blue team exercise is different from a
security audit, in which an external group of penetration testers examines an organization's
configuration and generates a report detailing vulnerabilities and problems.

There are a set of common objectives that attackers pursue:

 Persist presence
 Steal data
 Hackstortion
 Ransomware
 Coin miners
 Destroy systems

Persist presence

When an attacker can persist their presence on a target organization’s information systems, it
means that they have reliable remote access via a back door to the target organization’s
systems. This compromised system is also termed a beachhead or foothold as it is the initial
location through which the attacker gains access to the target organization’s network.

Rather than executing the attack the moment that a foothold has been reached, competent
attackers often set up the digital equivalent of a base camp from which they are able to
reconnoitre the target organization’s infrastructure and systems. Many attackers spend
months examining a network to determine what existing security and monitoring systems are
in place before they begin to take the actions that will achieve their objectives.

Once a beachhead is established and the attackers have an accurate picture of the target
organization's infrastructure, the attacker can upload and deploy their exploit toolkit to a
location on the target organization's network. The attacker can then use the tools in this
toolkit to move laterally across the target organization's network, compromising further
systems and elevating privileges.

Steal data

One of the oldest types of attacks is the theft of data. In the case of the 2013 Target data
breach, attackers were able to successfully exfiltrate credit card data from the merchant and
sell those credit card numbers on the dark web. Other well-known breaches involving the
stealing of data have involved the internal communication of political parties that have later
been publicly released as a method of discrediting the authors of that communication.

There are a variety of methods that can be used to steal data, from being able to extract
information from databases using SQL injection attacks, through to the exfiltration of entire
virtual machines when attackers gain control of virtualization infrastructure, export
production virtual machines, and then upload the exported virtual machine files to the
internet.
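To make the SQL injection vector mentioned above concrete, here is a minimal, self-contained sketch of why parameterized queries block it. The table, column, and values are invented for the example; the pattern applies to any SQL database driver.

```python
import sqlite3

# Set up a throwaway in-memory database with one sensitive row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, card TEXT)")
conn.execute("INSERT INTO customers VALUES (1, '4111-XXXX')")

user_input = "1 OR 1=1"  # attacker-supplied value

# Vulnerable: string concatenation lets the input rewrite the query's logic,
# so the WHERE clause becomes "id = 1 OR 1=1" and every row leaks.
leaked = conn.execute(
    "SELECT card FROM customers WHERE id = " + user_input
).fetchall()

# Safe: the placeholder binds the input as data, never as SQL, so the
# malicious string simply fails to match any id.
safe = conn.execute(
    "SELECT card FROM customers WHERE id = ?", (user_input,)
).fetchall()
```

Here `leaked` contains the card row while `safe` is empty, which is the whole point of parameter binding.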

Hackstortion

Hackstortion is a term for what occurs when an attacker compromises a target's network and
then demands payment for a specific action to be taken. This action might be for the
attackers to destroy sensitive data they exfiltrated rather than exposing that data to the
public. Another might be to return command and control of the target organization's
infrastructure to the original owner. Hackstortion can include data theft, though it
specifically involves a financial demand being placed on the organization, rather than having
the data sold or released to the public without such a demand being made. The red team
might simulate an attack where hackstortion is the objective pursued, either by exfiltrating
data or by taking control of the target organization's infrastructure as proof that the
organization was vulnerable to this approach.

One recent example of hackstortion occurred when a group of attackers compromised the
information systems of a popular television production company and threatened to release
digital copies of unaired episodes of popular shows to file sharing sites unless an extortion
payment was made. Another example of hackstortion involved attackers who attacked dating
sites, extracted personal data, and then threatened to expose that personal data to the public
unless payments were made, or certain actions were taken.

Yet another form of hackstortion occurs when administrative accounts of cloud service
providers are compromised. When this occurs, the attacker threatens to delete all
infrastructure hosted in the account unless a ransom is paid within a short period of
time, usually less time than it would take for the attacked organization to recover
administrative control through the cloud service provider's support mechanisms.

Ransomware

Ransomware, also known as cryptoware, encrypts files and sometimes entire operating
systems so that they are inaccessible unless a special decryption key is provided. The
attackers will provide a decryption key that can be used to recover the encrypted systems for
a fee, usually paid in a cryptocurrency such as Bitcoin. The red team's goal might be to
install ransomware as a method of demonstrating that the organization's infrastructure is
vulnerable to this attack.

Ransomware is effective because many organizations do not have comprehensive data
backup and recovery strategies. Organizations are faced with the choice of losing almost all
their data or paying the ransomware fee to have the data readily recoverable. Recent surveys
indicate that approximately 60% of organizations suffered some form of ransomware attack
in 2016. Reports also indicate that ransomware can be very lucrative to the attacker, which is
one reason why ransomware attacks have become more prevalent.

Coin Miners

Coin mining malware is software that performs the calculations associated with
cryptocurrencies such as Bitcoin. Rather than run coin mining software on their own
infrastructure, with its attendant costs in hardware and electricity, attackers run
cryptocurrency mining software on the infrastructure of the compromised organization.
The red team's goal in an exercise might be to install coin mining malware, simulating
this type of attack.

The payoff for the attacker is that they can generate cryptocurrency using the compromised
infrastructure, with the attacked organization providing the CPU resources. Another
advantage of this type of attack is that unless an organization has a comprehensive and
effective monitoring solution, the coin miners can run quietly in the background, generating
income for the attackers for some time without the target organization becoming aware that
anything is amiss.

Destroy Systems

The objective of some attackers is to destroy the infrastructure of the target organization. This
is possible because certain types of malware can execute code that causes harm to storage,
memory, CPU, and networking hardware devices. This code functions by pushing these
devices beyond their tolerances; for example, causing memory or CPU to overheat and fail.
This type of attack has also been used by state actors against industrial equipment; for
example, when Stuxnet was used to attack centrifuges in Iranian nuclear facilities.

Overview

A kill chain is an idea originally taken from military strategy, describing the structure of
an attack against an objective. Lockheed Martin applied this idea to information security,
and it is now used as an industry standard framework for describing the progression of
attacks against information systems.

Reconnaissance

Sophisticated attackers don't randomly attack organizations; they spend a significant amount
of time researching their target. An attacker will use the reconnaissance phase to determine
whether a target is worth attacking, the objectives of an attack, and the characteristics of
the target.

For example, an attacker might spend time examining LinkedIn to determine which staff hold
specific roles within an organization. Before they've taken any overt action against the target
organization, a sophisticated attacker may have a detailed understanding of the structure of an
organization's information systems and security teams. Using tools like LinkedIn, not only
could an attacker determine who the senior information systems staff are, but they'd also be
able to deduce the likely nature of those systems based on the experience of the staff in
question. An organization whose IT department is staffed by professionals holding an
extensive variety of Microsoft certifications likely uses Microsoft products. An organization
whose database administrators all have extensive prior experience with a specific database
product, such as MySQL, is likely using that product to host its production databases.

During the reconnaissance phase, attackers examine external services, such as web
applications provided to customers, websites, email, and DNS, to determine the
characteristics of those services. For example, does an organization host its email
infrastructure in Office 365 or is it using an on-premises solution? The answers to these
questions will shape the attacker's strategy as they progress through the kill chain.
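As a sketch of this kind of fingerprinting, the function below infers a likely email provider from an organization's published MX records. The hostname suffixes are assumptions based on common public configurations, and the MX hostnames themselves would come from a separate DNS lookup; this is the classification step only.

```python
def classify_email_hosting(mx_hosts):
    """Infer likely email hosting from MX record hostnames.

    mx_hosts: list of MX hostnames for a domain (trailing dots allowed).
    A heuristic sketch: the suffix-to-provider map below reflects common
    public configurations and is illustrative, not exhaustive.
    """
    suffix_map = {
        ".mail.protection.outlook.com": "Office 365 / Exchange Online",
        ".google.com": "Google Workspace",
        ".pphosted.com": "Proofpoint gateway",
    }
    for host in mx_hosts:
        h = host.lower().rstrip(".")
        for suffix, provider in suffix_map.items():
            if h.endswith(suffix):
                return provider
    return "on-premises or unknown"
```

An internal red team running the exercise described below could apply exactly this inference to its own organization's DNS records to see what an outsider would learn.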

An internal red team is at an advantage because they already know which information
systems are in use at the organization. An exercise that an internal red team might engage in
is trying to ascertain, from external sources such as LinkedIn, as well as other public
information, such as DNS registration records and passive monitoring, exactly how much
information about the nature of the organization’s information systems could be determined
by a diligent investigator who only had access to external sources of data.

Weaponization

Weaponization involves creating, or selecting existing, remote access malware. This
malware, when deployed, will allow the attacker to gain a foothold or beachhead in the target
organization. The selection of malware will be determined by information gained during the
reconnaissance phase and will target vulnerabilities that are likely to exist within the target
organization's information systems infrastructure. For example, the malware selected for
attacking a website will be substantially different if the organization's website is hosted on
IIS with a SQL Server backend compared to a website hosted on Apache with a MySQL
backend. The better tailored the malware or exploit is to the target organization, the more
likely it is to succeed.

Delivery

The delivery phase involves having the target of the attack execute the malware on the target
organization’s information systems infrastructure. Some attacks require user intervention for
the remote code to execute; other attack types can be performed remotely.

There are a variety of delivery methods that may be leveraged to meet the objectives of the
delivery phase that include, but are not limited to:

 Phishing attacks
 Crafted file attacks
 Remote code execution
 Watering hole attacks
 Found USB stick attack
 Exposed VPN credentials

Phishing attacks

A phishing attack uses a specially crafted email sent to users in the hope that they will open
the email. Depending on the sophistication of the attack, the user may have to click on a link
to trigger the next stage of the attack. There are several varieties of phishing attack that
require differing levels of user interaction. Simply opening the email may, in some scenarios,
trigger remote code execution. Clicking on a link in the email may download remote code
that executes directly on the target’s system or may take the user to a website, which triggers
remote code execution.

Another common form of phishing attack involves the phishing of credentials. In this type of
attack the target user is sent an email that looks legitimate, asking them to navigate to a site
where they need to sign in with their organizational credentials to perform a task; for
example, an email reminding the user to change their email account password that directs
them to a site configured to look identical to their normal webmail site. Unless the user is
paying close attention, they may enter their credentials, which are then harvested for later
use by the attacker.
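Many credential-phishing links can be caught by a simple host check in a mail filter. The sketch below is illustrative (the function name and trusted-domain list are invented): it flags any link whose host is neither the expected domain nor one of its subdomains.

```python
from urllib.parse import urlparse

def is_suspicious_link(url, trusted_domains):
    """Return True if the URL's host is not a trusted domain or a
    subdomain of one. A deliberately minimal filter sketch; production
    filters also check registrable domains, homoglyphs, and redirects."""
    host = (urlparse(url).hostname or "").lower()
    for domain in trusted_domains:
        domain = domain.lower()
        if host == domain or host.endswith("." + domain):
            return False
    return True
```

Note that the check compares the end of the hostname, so a lookalike such as `contoso.com.account-verify.example` is still flagged even though it begins with the trusted name.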

Crafted file attacks

In this type of attack a specially crafted file is emailed to a target user. This file, when
opened, executes malicious code that installs the attacker's software on the recipient's
computer. If the file is crafted well enough, or the configuration of the user's computer
allows untrusted code to run, it's possible that simply opening the document will trigger the
execution of the attacker's code.

Remote code execution

This type of attack involves sending specially crafted data to an information system, such as
an application or service running on a server. For example, sending specially crafted traffic to
computers running the obsolete SMB1 storage protocol can allow attackers to execute code
on those computers. Other types of remote code execution vulnerabilities allow attackers to
inject code into a remote system’s memory and have the system execute that code.

Watering hole attack

In a watering hole attack, malware is planted on an insecure site that people at the target
organization are known to frequent. For example, people at a specific organization may be
patrons of a specific golf club. By compromising the golf club's website and planting
malware on it, it's possible that the malware might be downloaded and installed on work
computers when people within the organization visit the website during office hours.

Found USB stick attack

In this type of attack, USB sticks are dropped casually on the ground outside the front
entrance of the building, or in outside areas that employees are known to frequent, such as
the area used for cigarette breaks. Some employees will plug these USB sticks into their
work computers, which allows malware to be installed on those computers, giving the
attacker internal network access.

Exposed VPN credentials

Credential breach websites, such as Troy Hunt's HaveIBeenPwned.com, notify users when
websites where they've created accounts suffer data breaches that compromise the
credentials of those accounts. Those who have investigated these data breaches have found
that many users of third-party websites sign up using work, rather than private, email
accounts. As users with poor information security practices are likely to use the same
password for their work account as they do for third-party websites, attackers that gain
access to breach data potentially hold the work credentials of the users that signed up for
the breached sites. Of the work credentials exposed when a site's account database is
breached, it is not unreasonable to assume that some will work with organizational VPN
systems. So it is possible that attackers will gain access to an organization through a VPN
because a person within the organization signed up to an external website using their
organizational email address and password, and those credentials were later exposed.
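Defenders can check whether a password appears in known breach corpuses without revealing it, using the k-anonymity model of HaveIBeenPwned's Pwned Passwords range API: only the first five characters of the password's SHA-1 hash are sent to the service, and the returned suffixes are compared locally. The sketch below performs no network call; the caller supplies the suffix list that the API would return.

```python
import hashlib

def hibp_range_query_parts(password):
    """Split the SHA-1 of a password into the 5-character prefix that is
    sent to the range API and the suffix that is compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password, returned_suffixes):
    """returned_suffixes: suffixes from the range API response for the
    prefix (supplied by the caller here, so no network access is needed).
    Returns True if the password's hash suffix appears in the list."""
    _, suffix = hibp_range_query_parts(password)
    return suffix in set(returned_suffixes)
```

Because only a hash prefix ever leaves the machine, this check can be run against employee-chosen passwords without exposing them to the breach-lookup service.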

Exploitation

In this phase, the attacker’s malware code successfully triggers, leveraging the targeted
vulnerability. Depending on how well the attacker was able to ascertain the properties of the
target information systems, this may occur quickly or may take several tries before the code
successfully runs.

Installation

In the installation phase, the original malware code is leveraged to deploy an access point,
also known as a back door, through which the attacker can access the compromised
beachhead system. This usually occurs through the original malware code downloading and
running exploit tools remotely, which eventually provide the attacker with a remote access
point into the target organization’s network.

Command and Control

In the command and control phase, the attacker has achieved persistent access to the target
organization’s information systems. In reaching this phase the attacker will likely have
leveraged the following:

 Lateral movement
 Privilege escalation
 Domain dominance

Lateral movement

It is highly likely that the first system that an attacker compromises isn’t the one that allows
the attacker to achieve their objective. Lateral movement is where an attacker begins to
compromise other systems on the network, increasing the number of compromised systems as
they move laterally towards accomplishing their goal.
An example of lateral movement might be where a member of the accounting team responds
to a phishing email and has malware installed on their computer. This malware can extract
the cached credentials of a member of the organization's first level support team as well as
provide the attackers with remote access to the target organization's network. The attacker is
then able to use the credentials of the first level support team to gain access to other systems.
In doing so the attackers can eventually capture the credentials of a member of the domain
administration team and leverage those credentials to gain domain dominance.

Privilege escalation

Privilege escalation is the process by which an attacker leverages a compromised
unprivileged account, such as that of a standard user or service, into control over an account
able to perform actions beyond those original privileges. In the previous example the attacker
started with access to the computer of a user with no administrative privileges. By running
specially crafted software, they were able to capture the credentials of a user with greater
network privileges. Once they had those privileges, they were able to escalate further until
they had full administrative permissions.

Domain dominance/Administrative privilege

A common goal of the command and control phase is to get administrative privileges, also
termed "root privileges," on the target organization's information systems. For example,
control of an organization's domain controllers provides domain dominance. Once an
attacker has control of an organization's domain controllers, they can most likely perform
any action they desire on the network. There are exceptions to this rule, but they require
separation of administrative privileges and the deployment of technologies such as Just in
Time and Just Enough Administration.

Actions on Objective

In this phase, the attacker, or red team in the exercise, carries out its objective. As mentioned
earlier, this could be to steal data, deploy ransomware, deploy coin mining software, extort
the organization, or destroy systems. The Actions on Objective phase is the attacker’s
endgame.

Overview

A substantive difference between a properly functioning red team and penetration by an
attacker of nefarious intent is that, as part of the penetration process, the red team documents
the vulnerabilities it finds in the systems it is attacking. This allows the organization to
remediate those vulnerabilities after the exercise concludes so that the organization is no
longer vulnerable to that specific set of vulnerabilities.

The red team should also ensure that any modifications that they make to the organization’s
information systems during the exercise can either be rolled back or remediated by
implementing a better security configuration. Overall success in the exercise will mean that
the red team will have to institute completely different steps in their kill chain when the next
red team versus blue team exercise occurs, because the issues raised by the previous exercise
will all have been addressed.

Blue team role

The blue team represents and is comprised of your organization’s existing information
security and IT administration staff. While part of the purpose of red team exercises is to
explore how an organization is vulnerable to digital infiltration by an external attacker and to
remediate those vulnerabilities, another important part of red team exercises is to train
organizational staff on how to detect, investigate, and respond to attacks against the
organization’s information systems.

Red team exercises function as a practical drill for an organization’s existing information
security and IT administration staff. They also function as a practical drill for an
organization’s existing security response policies and procedures. Just as a disaster recovery
drill tests the adequacy of an organization’s disaster recovery policies and procedures, a red
team exercise tests an organization’s security incident response policies and procedures.

Blue team goals

When conducting a red team / blue team exercise, the blue team has several overarching
goals. These include:

 Stopping the red team from successfully achieving its goals. The best blue team
outcome is to block the red team from gaining a foothold in the target organization.
Depending on how this scenario plays out, it could be because the organization’s
existing security posture makes it extremely difficult to digitally infiltrate. However,
it is important to note with this outcome that just because the organization wasn’t
infiltrated this time doesn’t mean that vulnerabilities don’t exist in the organization’s
security configuration or incident response policies, it just means that the red team
wasn’t able to successfully exploit them this time. One response when this goal is
achieved is for the organization to engage with a new and separate organization to
provide red team penetration testing services for the next red team exercise. The new
organization may have a red team approach that exposes vulnerabilities that weren’t
uncovered by the previous red team.
 Early detection and effective response to red team activities. When this outcome
occurs, the blue team quickly detects and responds to red team activities. While the
red team makes some progress towards its goals, the blue team has enough
information to detect and respond to their activities and to evict the red team from the
target organization’s information systems.
 Post-exercise report. This report should detail blue team successes and failures.
Independent of the outcome, this report will assist in improving the processes that the
internal teams follow when a real, rather than simulated, attack occurs. It also gives
members of the blue team a formal chance to reflect on what they did well and what
they could do better. For example, if a bottleneck occurred because event logs from a
system were not accessible to the investigators during the exercise or the investigators
missed critical evidence in the event logs, the report would highlight this problem.
 Revise the incident response strategy. The outcome of red team exercises shouldn't
only involve remediating hardware, software, and configuration vulnerabilities in an
organization's security configuration, but also procedural vulnerabilities in the way that
personnel respond to the attack simulation. The incident response strategy provides
organizations with a formal process for responding to incidents. This goes beyond the
phases of the blue team's kill chain and includes the responses required at an
organizational level, for example, when it is necessary to notify external stakeholders
about a potential breach. Based on the results of the red team exercise, it may be
necessary to adjust the incident response strategy so that the organization can respond
more effectively to future incidents.

Red team gains complete dominance of the network. This is the worst outcome from the
perspective of the blue team, and it indicates that the current information systems
configuration and incident response policies need revision and remediation.

Overview

In the information security lexicon, a kill chain describes the structure of an attack against an
objective. While usually used to describe the phases of a red team’s operation, it’s also
common in the information security literature for blue teams to have their own kill chain.
Rather than describing the structure of an attack against an objective, the blue team kill chain
describes the phases of detecting and responding to an organizational attack. Although there
are a variety of different kill chain phases discussed in the information security literature,
blue team kill chains generally include the following phases:

 Gather baseline data
 Detect
 Alert
 Investigate
 Plan a response
 Execute

Gather baseline data

Having adequate amounts of baseline data allows you to understand what your environment
looks like when it is not under attack. It is difficult to know what is unusual for your network
unless you have a good idea of what usual looks like. To analogize: it's easier to find needles
in a haystack if you have an extremely good understanding of the characteristics of a
haystack that has no needles present.

Gathering good baseline data means configuring effective logging, monitoring, and auditing
for your organization. When configuring how you will collect baseline data, consider
enabling all auditing and logging options. The more telemetry that you have, the better
picture you’ll be able to generate of what normality looks like in your organization’s
environment. If you haven’t configured all telemetry options, it is possible that you won’t
have a clear enough picture that will allow you to accurately distinguish normal from
abnormal activity. Collect telemetry over a sustained period that represents your
organization’s normal operations. Baseline data should also be regenerated as changes are
made to information systems on the network, so it reflects the current operation of the
network, rather than only representing the organization’s information systems as they existed
at a fixed point in the past.
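As a minimal sketch of what such a baseline might look like in practice, the functions below learn per-hour event-count norms from telemetry collected during normal operations and flag counts that deviate sharply. The function names, input shape, and three-sigma threshold are all illustrative assumptions; a real deployment baselines many metrics across many systems.

```python
import statistics
from collections import defaultdict

def build_baseline(samples):
    """samples: iterable of (hour_of_day, event_count) pairs observed
    during normal operations. Returns {hour: (mean, stdev)} describing
    what 'usual' looks like for each hour of the day."""
    by_hour = defaultdict(list)
    for hour, count in samples:
        by_hour[hour].append(count)
    return {h: (statistics.mean(c), statistics.pstdev(c)) for h, c in by_hour.items()}

def is_anomalous(baseline, hour, count, sigma=3.0):
    """Flag a count more than `sigma` deviations above that hour's mean.
    The stdev floor of 1.0 avoids flagging noise on zero-variance hours."""
    mean, stdev = baseline.get(hour, (0.0, 0.0))
    return count > mean + sigma * max(stdev, 1.0)
```

For example, fifty logon events at 2 a.m. stand out against a baseline of near-zero overnight activity, while the same count during business hours would be unremarkable. Regenerating the baseline after infrastructure changes, as the text advises, simply means rerunning `build_baseline` on fresh telemetry.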

Detect

Detecting an intruder is often a case of noticing abnormal activity on your organization's
information systems. For example, a server where, for the last few months, connections via
remote desktop protocol (RDP) have only been made during business hours is suddenly
servicing RDP requests late at night on weekends; or a computer is transmitting unusually
large amounts of data to hosts on the internet where previously the amount of traffic it
transmitted was negligible.
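The RDP example above can be sketched as a simple filter over Windows logon events. In the Windows Security log, event 4624 with logon type 10 indicates a RemoteInteractive (RDP) logon; the business-hours window below is an assumed norm for the example.

```python
from datetime import datetime

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59, an assumed norm for this sketch

def off_hours_rdp_logons(logon_events):
    """logon_events: iterable of (timestamp, logon_type) pairs, where
    logon_type 10 is the Windows code for RemoteInteractive (RDP) logons.
    Returns the RDP logons occurring on weekends or outside business hours."""
    flagged = []
    for ts, logon_type in logon_events:
        if logon_type != 10:
            continue  # ignore non-RDP logons (console, network, etc.)
        if ts.weekday() >= 5 or ts.hour not in BUSINESS_HOURS:
            flagged.append((ts, logon_type))
    return flagged
```

A late-Saturday RDP logon would be flagged, while the same logon on a Wednesday morning would pass; in practice the window would come from the baseline rather than being hard-coded.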

Detection can be difficult as competent intruders will attempt to leave minimal trace of their
activities in the telemetry logs of your organization’s information systems. Rather than
detecting abnormalities by manually examining event logs, many organizations today rely
upon Intrusion Detection Systems (IDS) and Security Information and Event Management
(SIEM) systems to identify suspicious anomalies in the telemetry generated by information
systems.

Alert

When does a series of unusual events correlated across multiple logs become worthy of
further investigation? Correlation with other events is important. A series of failed
attempts at remote RDP access is by itself suspicious, but doesn't indicate a problem. A
series of failed attempts at remote RDP access, a successful remote logon via RDP, and then
suspicious failures of the lsass.exe process occurring in succession is worthy of investigation.
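A sketch of such a correlation rule follows. The event names and the ten-minute window are illustrative stand-ins, not real log field values; a SIEM would express the same idea as a correlation query.

```python
def correlate(events, max_gap=600):
    """events: time-sorted list of (epoch_seconds, event_name) pairs.
    Looks for the escalation chain described above: three or more
    'rdp_fail' events, then an 'rdp_success', then an 'lsass_crash',
    with no gap between consecutive events exceeding max_gap seconds."""
    fails = 0
    stage = "fails"
    last_ts = None
    for ts, name in events:
        if last_ts is not None and ts - last_ts > max_gap:
            fails, stage = 0, "fails"  # too much time elapsed; start over
        if name == "rdp_fail":
            fails += 1
        elif name == "rdp_success" and fails >= 3:
            stage = "success"
        elif name == "lsass_crash" and stage == "success":
            return True  # full chain observed: raise an alert
        last_ts = ts
    return False
```

Requiring the full sequence, rather than alerting on each failed logon, is one way to keep the false-positive rate low enough to avoid the alert fatigue discussed below.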

Alerting is the process of bringing suspicious anomalies in the telemetry generated by
information systems to the attention of the blue team. It is important, though, that the
members of the blue team tune their IDS and/or SIEM systems to provide an appropriate
level of alerting. If an alert system produces too many false positives, that is, alerts that
aren't associated with attacker or red team activity, then the blue team may, through alert
fatigue, miss an alert that is associated with red team activity. For example, during a recent
breach at a famous retailer, alerts were generated by the retailer's internal monitoring about
the attacker's activity but were discounted at the time as false positives, because the internal
systems generated so many alerts for innocuous activity that it wasn't clear to the security
team whether any individual alert indicated a problem or misclassified routine events.

Some IDS and/or SIEM systems will provide recommendations as to which activity requires
further investigation and may even suggest further ways to find evidence to validate the
hypothesis that an intruder is present within organizational systems and that the organization
is under attack.

Investigate

Once the blue team has verified the presence of an intruder on the network, they need to
determine the degree to which the intruder has infiltrated the network. A detailed and
thorough investigation should determine which systems the intruder has compromised, when
those systems were compromised and how those systems were compromised. These steps are
important because the scope of many intrusions often exceeds the initial assessment of the
severity of the intrusion. Only by understanding where, how, and when systems were
compromised is it possible to begin to effectively remediate vulnerabilities that led to the
compromise and to achieve the goal of ejecting the intruder from the organizational network.

Plan a response
Organizations shouldn't attempt to evict an intruder until they have a good working
understanding of the topology of the intrusion. Similarly, the method by which the intruder is
evicted and the vulnerabilities are remediated should be planned rather than executed in an ad
hoc manner.

The red team most likely has fallback strategies, and a well-planned response counters them.
A purely reactive response can turn into a game of whack-a-mole in which the attacker always
has a counter move up their sleeve. This includes becoming stealthier to make it seem as
though they have been evicted from the network when, in reality, they have moved laterally to
a newly compromised host and temporarily ceased activity while they wait out the blue
team's countermeasures.

From an organizational perspective, while time is of the essence in evicting the intruder, in
most real-world situations the intruder is only detected long after they have infiltrated the
network. This means it is unlikely that substantively more harm will occur in the time it takes
the blue team to formulate an effective response than would occur if the blue team responded
in an immediate and ad hoc manner.

Execute

During the execution phase, the blue team enacts the response plan to evict the intruder from
the organization’s information systems and to remediate the vulnerabilities in the security
configuration that the intruder leveraged when infiltrating the network. If completed
successfully, the intruder will no longer be present within the organization’s information
systems and the process of performing a more detailed post incident analysis can occur.

Overview

Privilege escalation is the process by which an attacker acquires the ability to perform a
greater variety of tasks on the organization's information systems than those that they were
able to perform when they gained an initial beachhead on the network. An example of
privilege escalation would be for the attacker to start with access to the credentials of a
standard user account and to use a variety of techniques to end up with local administrator or
greater privileges. The end goal of privilege escalation is to acquire full administrative
privileges. In an Active Directory environment this would be the equivalent of the attacker
gaining domain admin privileges.

Restricting privilege escalation is about limiting the ways in which an attacker can elevate
the privileges of a compromised unprivileged account. Methods of reducing the probability of
privilege escalation include:

 Privileged access workstations
 Just enough administration
 Just in time administration
 Restrictions on administrative accounts

Privileged access workstations


A privileged access workstation (PAW) is a computer that is only used to perform
administrative tasks. This computer has a locked down configuration compared to computers
used for day-to-day activities on the network. PAWs have the following characteristics:

 Access is limited to staff who perform administrative tasks. PAWs are specially
locked down computers that should only be used for administrative tasks. PAWs
should be able to connect to sensitive servers on your organization’s network but
should be unable to browse the internet or perform non-administrative tasks, such as
responding to email. Administrative accounts used to manage sensitive servers should
be configured so that they can only be used on PAWs and not on typical end user
computers used for day-to-day organizational tasks.
 Restrictions on software that can run on the PAW. The software configuration of the
PAW is hardened so that only specifically authorized software can run on the PAW.
This means that malware that might be deployed on the PAW to capture the
credentials of an administrator or to elevate privileges will be unable to run because it
will not be on the list of applications or scripts that are specifically authorized for the
PAW. Windows Defender Device Guard and Windows Defender Application Control
are technologies that you should deploy on PAWs to control code that can be
executed on the computer.
 Protected by secure technologies. PAWs are configured with secure boot, BitLocker
and technologies including Credential Guard. This reduces the chance that malware
can take control of the computer during the boot process. Credential Guard is a
technology that protects credentials stored on the computer by storing them in a
special virtualized container that is only accessible to authorized processes within the
operating system. Credential Guard minimizes the chance of successful pass-the-hash
or pass-the-ticket attacks.

Just enough administration

Just Enough Administration (JEA) allows organizations to create special PowerShell
endpoints that limit which PowerShell cmdlets, functions, parameters, and values can be used
during a connection to the endpoint. Rather than having to use a specially configured
administrator account to perform an administrative task, just enough administration allows
for a standard user account to leverage a special virtual account when connected to the
PowerShell endpoint. JEA minimizes the chance of privilege escalation by allowing standard
accounts to perform extremely limited privileged tasks only when connected to specific
PowerShell endpoints.

Just in time administration

Just in time administration is a technology where administrative privileges are provided only
for a limited amount of time. When not granted administrative privileges, accounts only have
standard user privileges. It is also possible to have those limited time privileges only granted
subject to approval by another person. Just in time administration makes privilege escalation
difficult because privileges are time limited, subject to request and approval where necessary,
and can be limited in scope. Just in time administration can be combined with JEA.
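The core idea of just in time administration, privileges that exist only inside an approved window, can be illustrated with a toy model. The class and method names below are invented for illustration and do not correspond to any specific Microsoft product.

```python
from datetime import datetime, timedelta

class JitGrants:
    """Toy just-in-time model: a privilege exists only inside an approved window."""

    def __init__(self):
        self._grants = {}  # (user, role) -> expiry time

    def approve(self, user, role, duration, now):
        # In a real deployment, reaching this point would be gated by a
        # request-and-approval workflow involving another person.
        self._grants[(user, role)] = now + duration

    def is_privileged(self, user, role, now):
        expiry = self._grants.get((user, role))
        return expiry is not None and now <= expiry
```

Outside the approved window the account behaves as a standard user, which is what makes a stolen credential far less useful to an attacker.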

Restrictions on administrative accounts


One way of limiting the possibility of privilege escalation is to restrict where administrative
accounts can be used. For example, allow administrative accounts for sensitive servers to be
used only on PAWs or on those sensitive servers themselves, and do not allow those accounts
to be used to sign in to servers or workstations that aren't sensitive. You can also configure
sensitive administrative accounts so that they can only be used at certain times of the day.

In highly secure environments, administrative accounts can be further limited by
implementing an Enhanced Security Administrative Environment (ESAE) forest. In this
model, the only accounts with administrative privileges in the production forest are standard
user accounts that are stored in the privileged forest.

The production forest has a one-way trust relationship with the privileged forest. This means
that accounts from the production forest cannot interact with the privileged forest. An
attacker that compromises an account in the production forest cannot elevate privileges as
that would require the ability to create or modify accounts stored in the privileged forest,
which is impossible because the privileged forest does not trust the production forest.

Overview

Lateral movement occurs when an attacker who has compromised one system is able to
compromise another system on the network by using the existing compromised system as a
jumping-off point. For example, a standard user's workstation is compromised, and the attacker
runs a tool to extract locally cached credentials. One of these sets of cached credentials
allows the attacker to gain access to a file server. Once the attacker gains access to the file
server, cached credentials stored on that server give them access to a domain controller.

There are a variety of methods that you can use to restrict lateral movement. Some of the
techniques that can be used to guard against privilege escalation can also be used to reduce
the chances that an attacker can perform lateral movement. Techniques that you can use to
restrict lateral movement include but are not limited to:

 Code integrity policies
 Network segmentation
 No common accounts or passwords
 Logon script sanitation
 Apply software updates and patches

Code Integrity policies

Code Integrity (CI) policies allow you to restrict which applications and scripts can run on a
computer. There are a variety of methods that you can use to enforce code integrity in
Windows environments, including AppLocker policies on pre-Windows 10 and pre-Windows
Server 2016 systems, or Windows Defender Application Control and Windows Defender
Device Guard on Windows 10 and Windows Server 2016 systems configured with
appropriate hardware. By restricting which code and scripts can run, you can restrict the toolset attackers
can make use of to perform lateral movement.

Network segmentation
You can restrict lateral movement by segmenting critical workloads onto separate networks
and VLANS and then controlling which traffic can cross those boundaries. Network
segmentation allows you to limit which hosts can communicate with sensitive servers.

For example, you might block traffic from workstations on your organization’s internal
network to servers except on the specific ports required by the workstations. You could also
configure segmentation through firewalls so that you allow a file server to communicate with
workstations on the ports required by file sharing, but not allow communication between the
file servers and workstations on any other port, including those used for administrative
activities such as the ports used by RDP, PowerShell Remoting, or SSH. You can also
segment the network so that sensitive servers only allow communication using
administrative protocols from a select set of computers that are locked down and configured as
PAWs.
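The segmentation approach described above amounts to a default-deny rule table: traffic is dropped unless an explicit rule allows it. The zones and ports in this sketch are invented for illustration.

```python
# Default-deny segmentation: traffic is allowed only if it matches a rule.
# Each rule is (source zone, destination zone, destination port).
ALLOW_RULES = {
    ("workstations", "file-servers", 445),   # SMB file sharing
    ("paws", "file-servers", 3389),          # RDP only from PAWs
    ("paws", "file-servers", 5985),          # PowerShell Remoting only from PAWs
}

def is_allowed(src_zone, dst_zone, dst_port):
    """Return True only when an explicit allow rule matches (default deny)."""
    return (src_zone, dst_zone, dst_port) in ALLOW_RULES
```

Under these rules, a workstation can reach a file server for file sharing but cannot open an RDP session to it; only a PAW can.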

No common accounts or passwords

Organizations should avoid creating common local accounts across systems. This not only
includes disabling the built-in administrator account and using a unique alternative instead,
but also ensuring that a common account isn't added to multiple systems with the same
credentials, for example, a standard account created across all systems that have a specific
application installed.

Organizations should avoid using a common password for separate accounts. For example,
audits of the security configurations of some organizations have found that even though they
create separate custom accounts to be used for services on separate computers, those custom
accounts are configured with a single common password. A red team or attacker who can
determine that password will have an easier time performing lateral movement than an
attacker or red team in an environment where every password is complex and unique.

Local Administrator Password Solution (LAPS) can be used to ensure that the local
administrator account on each computer in an Active Directory environment has a unique
password. This allows organizations to avoid the common trap of using a standard local
administrator account password across all computers in the organization.
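The principle behind LAPS, a unique random local administrator password per computer, can be sketched as follows. The password policy and in-memory storage here are simplifying assumptions; the real LAPS stores and rotates passwords in Active Directory.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_local_admin_password(length=24):
    """Generate a random password using a cryptographically strong source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One password per computer: a credential or hash captured on one machine
# is useless for lateral movement to any other machine.
vault = {name: generate_local_admin_password() for name in ("ws01", "ws02", "ws03")}
```

Because every machine's password is independent, compromising one workstation no longer yields a credential that works everywhere.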

Logon script sanitation

Logon scripts can often include sensitive information, with some logon scripts even including
passwords in clear text. An attacker that gains access to a computer may have access to any
scripts that run on that computer as those scripts may be accessible over the network using
non-privileged credentials. Logon scripts should not contain sensitive information such as
account passwords. Where possible, logon scripts should be replaced by group policy or
configuration management tools such as System Center Configuration Manager or Microsoft
Intune.

Apply software updates and patches

Attackers will use any technique available to perform lateral movement within an
organization. This includes exploiting vulnerabilities in operating systems and applications
that have been patched by the vendor but haven’t yet been patched by the organization.
While exploit code exists for some vulnerabilities before those vulnerabilities are patched by
vendors, exploit code is more commonly available for vulnerabilities after those
vulnerabilities are patched by the vendor. This is partly because security researchers and
attackers can reverse engineer a software update to determine what vulnerability the software
update addresses and are then able to build a tool to exploit that vulnerability, rather than
having to discover the vulnerability through their own research.

Organizations should ensure that operating systems, applications, device drivers and
firmware have all appropriate software updates applied in a timely manner as this will restrict
attackers from using known exploits to perform lateral movement.

Overview

When information systems are properly configured, all attacks, even those that are
unsuccessful, leave some trace that they occurred. Clever attackers will attempt to remove
those traces once they have gained access to a system. If telemetry monitoring is configured
properly within an organization, monitoring systems will alert the blue team to potential
intrusion activity before the attackers have a chance to remove that telemetry from the
compromised systems.

Logging and monitoring

Unless there is a system to record events as they happen on a computer, finding evidence
about how and when something happened will be difficult. Therefore, the collection of
system event telemetry is important for detecting and understanding how an attacker is
infiltrating and compromising a system.

One way of securing event telemetry from deletion by an attacker who has compromised a
system is to move event telemetry off systems to a centralized location as quickly as possible.
Centralizing event logs provides the benefit of placing many data sources in a single location
where events can be correlated. Attackers who compromise a system will also be unable to
remove event log evidence of their activities if those events are recorded on a separate
system.
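The value of forwarding events off-host immediately can be shown with a toy model: wiping the local log on a compromised host does not remove the centrally held copy. The class names are invented for illustration; real deployments would use Windows Event Forwarding, syslog, or a SIEM agent.

```python
class CentralCollector:
    """Stands in for a remote syslog server or SIEM ingestion endpoint."""
    def __init__(self):
        self.events = []

    def receive(self, host, event):
        self.events.append((host, event))

class Host:
    def __init__(self, name, collector):
        self.name = name
        self.local_log = []
        self.collector = collector

    def log(self, event):
        self.local_log.append(event)
        # Forward immediately so a later wipe of the local log
        # cannot remove the evidence.
        self.collector.receive(self.name, event)

    def attacker_wipes_local_log(self):
        self.local_log.clear()
```

After the wipe, the local log is empty but the collector still holds the record of the intruder's activity.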

SIEM systems

SIEM systems perform analysis of event log data as it is generated. SIEM systems can
aggregate data from a variety of sources, correlate that data, and generate events based on
determinations made about that correlated data. SIEM systems can be software that runs on
Windows or Linux server operating systems and are also available as hardware or virtual
appliances. Some SIEM systems provide compliance, retention, and forensic analysis
functionality. They can be used in conjunction with, or as a replacement for, other event log
management systems in an organization.

IDS

An IDS is a software application, or a hardware or virtual appliance, that monitors an
organization's information systems for problematic activity or violations of policy. There are
multiple types of IDS including network intrusion detection systems (NIDS) that monitor
networks for suspicious activity or host-based intrusion detection systems (HIDS) that
monitor a specific system. Multiple IDS can report to a central SIEM system. This central
SIEM system would then provide centralized telemetry storage, correlation, analysis, alerts,
and security recommendations based on telemetry data. An intrusion prevention system (IPS)
is a special type of IDS that includes functionality allowing an automated response to occur
when an intrusion is detected.

Attack detection and machine learning

Recognizing the characteristic evidence of an attack in hundreds of thousands, if not millions,
of event log entries spread across a multitude of different event sources is like finding the
proverbial needle in a haystack. An advantage of big data and machine learning is that they
are very good at finding patterns and anomalies that may not have been apparent using older
analysis techniques.

Big data and machine learning techniques allow the characteristic traces of attacks that are
present in an organization’s event logs to be recognized and surfaced. This occurs because
while the characteristics of a single attack may be subtle, when the characteristics of
thousands of attacks are analyzed across tens of thousands of organizations, commonalities
are more easily identified. Cloud services ingest data constantly. This means that the
identifying characteristics of a newly recognized attack will become known to all customers
almost immediately.

Microsoft attack detection products

Microsoft has several products that can be used to detect suspicious activity on an
organization’s information systems based on collection and analysis of telemetry. These
products can be used individually or together depending on the organization’s need. Some of
these products can run locally on an organization’s network and other products use
Microsoft’s cloud infrastructure for management and analytic functionality.

Advanced Threat Analytics

Advanced Threat Analytics (ATA) is a solution that you can deploy in on-premises
environments to detect threats. ATA uses behavioral analytics to determine what constitutes
abnormal behavior on your organization’s network based on its understanding of prior
behavior of security entities. For example, ATA can notice when an account has suspicious
sign-on activity that differs from its normal sign-on activity, when an account performs an
enumeration of the membership of sensitive groups, or when a computer appears to be
participating in attacks, such as a golden ticket attack.

For more information on Advanced Threat Analytics, consult the following documentation:
https://www.microsoft.com/en-au/cloud-platform/advanced-threat-analytics

Azure Advanced Threat Protection

Azure Advanced Threat Protection (ATP) has very similar functionality to ATA, except all of
the telemetry is funneled for analysis into the cloud rather than that analysis being performed
on-premises. Similar to ATA, Azure ATP uses behavioral analytics to determine what
constitutes abnormal behavior on your organization’s network based on learning the prior
behavior of security entities. Azure ATP can ingest telemetry data from SIEM systems,
Windows Event Forwarding, directly from Windows Event Collector as well as RADIUS
accounting from VPN endpoints.

For more information on Azure ATP, consult the following documentation:


https://docs.microsoft.com/en-us/azure-advanced-threat-protection/what-is-atp

Azure Security Center

Originally deployed as a tool to analyze and report on the security of resources in Azure,
Azure Security Center agents can be deployed to on-premises servers. Azure Security Center
can analyze event telemetry from servers running on-premises, whether bare metal or
virtualized, as well as from servers running as IaaS virtual machines, correlating events so
that administrators are able to view the timeline of a specific attack as well as the steps that
can be taken to mitigate that attack.

For more information on Azure Security Center, consult the following documentation:
https://docs.microsoft.com/en-us/azure/security-center/security-center-intro

Windows Defender Advanced Threat Protection

Windows Defender Advanced Threat Protection is a product for Windows 10 endpoints that
provides the following functionality:

 Endpoint behavioral sensors. Monitors a Windows 10 computer's telemetry, including
data gathered from event logs, running processes, the registry, files, and network
communications. This data is forwarded to the organization's Windows Defender
ATP cloud instance.
 Cloud security analytics. Cloud security analytics takes the telemetry gathered at the
endpoint level and analyzes that data, providing threat detections and recommended
responses back to the organization. This analysis occurs against information available
to Microsoft across the Windows ecosystem as well as cloud products such as Office
365 and Azure. For example, Microsoft may learn about and resolve a specific threat
from the telemetry of one set of Windows Defender ATP customers. This insight
allows Windows Defender ATP to make recommendations when the same threat is
detected in the endpoint telemetry of another Windows Defender ATP customer.
 Threat intelligence. Windows Defender ATP doesn't just rely on telemetry collected
with customers' consent across the Microsoft ecosystem. Microsoft also has security
researchers and engages with partner organizations to identify attacker tools and
techniques and to raise alerts when evidence of these tools and techniques surfaces in
customer telemetry.

For more information on Windows Defender Advanced Threat Protection, consult the
following web page: https://docs.microsoft.com/en-us/windows/security/threat-
protection/windows-defender-atp/windows-defender-advanced-threat-protection

Office 365 ATP

Office 365 ATP is a service that you can add to an existing Office 365 subscription. It
protects email messages and files used with an Office 365 subscription, such as those stored
in a SharePoint Online or Teams site, and provides the following functionality:

 Scan email attachments to find malware
 Scan email messages and Office documents to locate malicious web addresses
 Locate spoofed email messages
 Determine when an attacker attempts to impersonate your users or your
organization's custom domains

Overview

The CIA Triad is a conceptual model for thinking about the security of information. The triad
is composed of the concepts of Confidentiality, Integrity, and Availability. There are multiple
conceptual models for thinking about information security. Other conceptual models, such as
the OECD’s Guidelines for the Security of Information Systems and Networks have nine
principles, and the NIST’s Engineering Principles for Information Technology Security
model has 33 principles. While there is no single generally agreed upon conceptual model for
describing all aspects of information security, a benefit of the CIA Triad is its simplicity,
which drives easy adoption by both information security workers as well as other
stakeholders within the organization.

Confidentiality

The confidentiality pillar of the CIA Triad involves ensuring that data stored within an
organization's information systems is accessible only to authorized individuals. There can be a variety of
reasons for ensuring the limited availability of data, from the data being a matter of national
and/or regional security, the data being business critical intellectual property, or the data
involving an individual’s personally identifiable information that various regulations specify
must be controlled in a particular manner. The confidentiality pillar of the CIA triad model is
about putting in place the appropriate controls to ensure that the dissemination of information
is limited to the intended audience and remains unavailable to unauthorized persons.

For example, an organization may have an online store. The backend information systems
infrastructure of this online store includes a database that hosts customer account data. This
data includes email address and an associated online store password kept in the form of a
cryptographic hash. A successful attacker might compromise the information systems of the
organization and get access to the email address and password hash pair. Even though the
password was stored as a cryptographic hash, if the hash was computed without a salt it is
often possible, using pre-calculated tables of such hashes, known as "rainbow tables," to
recover the original password.
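A minimal sketch of why an unsalted password hash is vulnerable to precomputed lookup. A real rainbow table uses hash chains to trade space for time, but a plain dictionary of precomputed hashes makes the same point; the candidate passwords here are invented for illustration.

```python
import hashlib

def unsalted_hash(password):
    """Hash a password with no salt, the vulnerable pattern in the example."""
    return hashlib.sha256(password.encode()).hexdigest()

# Table precomputed from a candidate password list, built before any breach.
candidates = ["password", "letmein", "hunter2", "correcthorse"]
lookup = {unsalted_hash(p): p for p in candidates}

# A hash exfiltrated from the compromised database can be reversed by lookup.
stolen_hash = unsalted_hash("hunter2")
recovered = lookup.get(stolen_hash)
```

A unique random salt per user defeats this: the attacker would need a separate precomputed table for every salt value.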

Alternatively, an attacker might compromise a database that stores customer credit card
information, exfiltrate that data, and then sell it. Another attacker might compromise an
organization’s email system and publicly disclose sensitive internal communications. These
scenarios would constitute a failure of information systems within the confidentiality pillar of
the CIA Triad model as information has become exposed to those that should not have access
to it.

Solutions such as Microsoft's Azure Information Protection allow organizations to address
the confidentiality pillar of the CIA Triad. Azure Information Protection not only allows
protected files to be accessed by authorized persons but can be used to limit how that access
occurs. For example, blocking sensitive documents from being opened on unrecognized
networks, which minimizes the chance of information leakage.

More Information: You can find more about Azure Information Protection at:
https://docs.microsoft.com/en-us/information-protection/understand-explore/what-is-
information-protection

Integrity

The integrity pillar of the CIA Triad involves ensuring that data retains its veracity over its
lifetime. This means that data isn’t modified or deleted without authorization. It also means
that authorized modifications are tracked as it is possible for an authorized person to make an
unauthorized modification. For example, if an Excel spreadsheet created several years earlier
is subject to legal discovery as part of ongoing litigation, the court will want to ensure that the
spreadsheet hasn’t been modified from its original state. If an organization has put in place
controls to address the integrity pillar of the CIA Triad, they’ll be able to demonstrate to the
court that the document is in the original, unmodified state.
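One common control supporting the integrity pillar is to record a cryptographic digest of a document when it is created and compare it later. A minimal sketch:

```python
import hashlib

def digest(data: bytes) -> str:
    """Compute a SHA-256 digest of the document contents."""
    return hashlib.sha256(data).hexdigest()

# Digest recorded when the spreadsheet was created.
original = b"FY2015 forecast spreadsheet contents"
recorded_at_creation = digest(original)

def is_unmodified(data: bytes, recorded: str) -> bool:
    """Recompute the digest years later and compare with the recorded value."""
    return digest(data) == recorded
```

A digest alone only demonstrates that the data matches what was recorded; in practice the recorded digest itself must be protected from tampering, for example by signing it or storing it in a separately audited system.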

Integrity of data

To address the integrity pillar of the CIA Triad, an organization needs to ensure that data
retains its veracity and hasn’t been subject to unauthorized modification. There are multiple
risks involved if an organization does not address the integrity pillar. For example,
information that is business critical could be modified or deleted, causing the organization
financial damage.

Ensuring the integrity of data isn’t just about ensuring that the data has the appropriate
security permissions applied. While only authorized people should be able to modify data,
it’s also necessary to ensure that it’s possible to detect when authorized people make
unauthorized modifications to data. This can only be done if auditing and change tracking are
implemented.

Integrity of configuration

Today the configuration of information systems is increasingly performed through code
rather than traditional manual methods. Technologies such as Puppet, Chef, and PowerShell
Desired State Configuration transform data describing system and application configuration
into actual system and application configuration.

Ensuring the integrity of configuration data is important because a hypothetical attacker
could, rather than attacking a running system directly, instead attack the code that describes
the configuration of that system. In doing so, they’d be able to indirectly modify the
configuration of the systems, making it simpler to attack those systems. The systems that host
the code that describes the configuration of other systems need controls in place to ensure
that unauthorized modifications are not made to the system configuration code and that all

Availability
The availability pillar involves ensuring that data is accessible to those that have permission
to access it when they need to have access to it. Not only do the information systems that host
the data need to be functioning in a reliable manner, but so do the security systems that
protect the data, as well as the networking systems that are used to connect to the systems that
host the data. This means that organizations need to ensure that the information systems that
host the data, as well as those systems' security and infrastructure dependencies, are highly
available.

To meet the availability pillar of the CIA triad, data and information systems should remain
available when information systems are under predictable load. For example, there were
substantial problems with the 2016 Australian Census as the Australian Bureau of Statistics
moved from manual to electronic collection of census data. On census evening, the systems
for handling the collection of that data failed, meaning that the agency missed collecting
census data on a substantial percentage of the Australian population on the census date.

To meet the availability pillar of the CIA triad, an organization needs to ensure that data
remains available after information systems become unavailable, either through equipment
failure, data corruption, or natural disaster. This means that organizations also need an
effective disaster recovery strategy, regularly moving backed up data to offsite locations.
Doing this will ensure that the data will remain recoverable if the primary storage site suffers
catastrophic failure or the data itself becomes corrupted either deliberately or inadvertently.

Overview

There are several ongoing preparations that an organization can take to improve their overall
approach to information security. These include:

 Developing a baseline security posture
 Classifying information
 Implementing change tracking and auditing
 Monitoring and reporting

Baseline security posture

A baseline security posture represents an organization's desired or expected security
configuration. Unfortunately, many organizations haven't achieved their stated baseline
security posture. For example, organizational policy might dictate that any security update
released by a vendor be applied to information systems within 30 days of release, but many
systems within the organization may not have been updated due to a lack of resources.

An organization's baseline security posture should be measurable where possible and
appropriate. For example, Microsoft provides the Security Compliance Toolkit, which can be
run against Windows based systems to assess which controls and settings comply with
recommended baselines and which controls and settings may need to be modified to reach a
compliant state.

More information: You can find out more about the Security Compliance Toolkit at:
https://docs.microsoft.com/en-us/windows/security/threat-protection/security-compliance-
toolkit-10
An organization’s baseline security posture should also include policies around the use of
administrative accounts, privileged access workstations, as well as technologies such as
privileged access management and just enough administration. For example, it is far more
difficult for an attacker to compromise a DNS server if the only way that the DNS server can
be managed is through a JEA endpoint accessible only to privileged access workstations,
which is available to a user only after a successful privileged access management request,
than if the DNS server can be managed from any domain joined workstation in the
organization at any point in time.

An organization’s security posture should also involve policies and configurations that restrict lateral movement. This includes deploying code integrity policies on servers to restrict the execution of unauthorized code and scripts, segmenting networks so that sensitive servers are accessible only to authorized hosts, ensuring that common local accounts and passwords aren’t reused, and applying software patches and updates in a timely manner.

The challenge is to balance the organization’s need to perform work tasks with a minimum of inconvenient steps against the need for security. For example, an organization must decide whether it is necessary for all users to enter a BitLocker PIN each time they start their workstations, or whether that requirement makes sense only on the privileged access workstations used by the administrators of information systems.

The baseline security posture is a work in progress. An organization should always be looking to improve its baseline security posture and should regularly engage external penetration testers to run red team exercises to assess the current security configuration for vulnerabilities.

Information classification

As part of an organization’s approach to preparing and maintaining an effective information security posture, it is necessary to determine which information needs to be protected and the level of that protection. Once this determination is made, the information can be appropriately classified.

The key to the classification process is the realization that not all information stored by the organization is sensitive. For example, how an organization treats the security of information related to the company picnic should differ substantially from how it treats the security of information related to the company’s finances. The classification of information determines which controls will be implemented when addressing the pillars of the CIA triad or any other information security framework.

Organizations are increasingly able to use machine learning technologies to automatically classify and protect information. For example, products such as Azure Information Protection allow for the automatic classification and protection of data based on the evolving properties of that data. If Azure Information Protection is implemented, then when a user types a credit card number into an Excel spreadsheet, the number is recognized as a credit card number, an appropriate classification is determined by the Azure Information Protection Agent, and an information protection template is automatically applied.
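The detection step described above can be illustrated with a toy classifier. This is not how Azure Information Protection is implemented, merely a sketch using the well-known Luhn checksum, and the label name is hypothetical:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used here to distinguish plausible credit card
    numbers from arbitrary digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify_cell(value: str) -> str:
    """Toy classifier: label values that look like card numbers."""
    candidate = re.sub(r"[ -]", "", value)
    if re.fullmatch(r"\d{13,19}", candidate) and luhn_valid(candidate):
        return "Confidential - Financial"  # hypothetical label
    return "General"

print(classify_cell("4111 1111 1111 1111"))  # well-known test card number
# → Confidential - Financial
```

Real products combine many such detectors (regular expressions, checksums, keyword dictionaries, and trained models) and then apply a protection template based on the resulting label.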

While it is possible to classify information manually, automatic classification mechanisms, especially those that use machine learning, have the advantage of being more consistent. They also make it possible to automatically reclassify information should a systematic classification error be uncovered, a task that, performed manually, would be as laborious and error-prone as the original classification.

One of the keys to successfully implementing an information classification schema is to keep the classification rules relatively simple and to seek feedback from stakeholders within the organization who are deeply familiar with the properties of the information being classified. A simple classification scheme is more readily understood. Classification categories can often easily be extended if the existing schema is found to be too simplistic. Classification schemes should also be tested with small groups before being used more widely within the organization.

Once information is appropriately classified, organizations can apply security controls to that information based on its classification. For example, information labelled “unclassified” has few security controls applied, while information labelled “legally sensitive” has stricter security controls applied.
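One simple way to express label-based controls is as a lookup table; the labels and control names below are hypothetical:

```python
# Hypothetical mapping from classification labels to the security
# controls applied to data carrying that label.
CONTROLS = {
    "unclassified":      {"encrypt_at_rest": False, "audit_access": False,
                          "restrict_external_sharing": False},
    "internal":          {"encrypt_at_rest": True,  "audit_access": False,
                          "restrict_external_sharing": False},
    "legally sensitive": {"encrypt_at_rest": True,  "audit_access": True,
                          "restrict_external_sharing": True},
}

def controls_for(label: str) -> dict:
    # Fail closed: an unknown or missing label gets the strictest controls.
    return CONTROLS.get(label, CONTROLS["legally sensitive"])
```

The fail-closed default reflects a common design choice: when classification is uncertain, it is safer to over-protect than to under-protect.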

Change tracking and auditing

Change tracking and auditing allow you to determine who modified a document, when the
document was modified, and what modifications were made to the document. Implementing
change tracking is important when addressing the integrity pillar of the CIA triad.

Change tracking also often provides organizations with the ability to roll back changes. For
example, should an authorized person make an unauthorized change to a document, those
changes can be detected and rolled back if change tracking is implemented as a security
control. If change tracking is not implemented as a security control, it may be impossible to
determine which unauthorized changes were made unless an in-depth analysis of the
document is performed. This is assuming, of course, that the unauthorized changes are
detected in the first place, something that can be difficult to do without change tracking
unless the changes are blatantly obvious.
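
Detecting which changes were made relative to the last approved version can be sketched with a line-level diff; the document contents here are invented for illustration:

```python
import difflib

# Two versions of a document kept by a change-tracking system.
approved = "Payment terms: 30 days\nPenalty rate: 2%\n".splitlines()
current  = "Payment terms: 90 days\nPenalty rate: 2%\n".splitlines()

def unauthorized_changes(approved, current):
    """Return the added/removed lines relative to the approved version."""
    return [
        line for line in difflib.unified_diff(
            approved, current, lineterm="", n=0)
        if line.startswith(("+", "-"))
        and not line.startswith(("+++", "---"))
    ]

for change in unauthorized_changes(approved, current):
    print(change)
```

Because the approved version is retained, the same mechanism supports rollback: the detected lines show exactly what must be reverted.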

Auditing is important as it provides information about which users, both authorized and
unauthorized, may have attempted or gained access to data. Auditing isn’t just limited to data.
As a part of maintaining an effective security posture, organizations should audit all changes
to information system configuration, from roles and features being added and removed
through to changes in security group membership and specific configuration settings.

For example, if auditing of security groups isn’t enabled, it might not be possible to
determine whether a user account has been added to a sensitive group by an authorized
administrator or an attacker who is attempting privilege escalation. If auditing of firewall
configuration isn’t enabled, it might not be obvious that specific ports have been opened to
allow an external user to gain remote access.
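
The group-membership example can be sketched as a filter over audit events. The event records below are simplified dictionaries and the authorized-actor list is hypothetical; Windows records additions to security-enabled global groups under event ID 4728:

```python
# Simplified audit events modeled as dictionaries.
events = [
    {"id": 4728, "group": "Domain Admins", "member": "eve",
     "actor": "svc-web01"},
    {"id": 4728, "group": "Print Operators", "member": "bob",
     "actor": "helpdesk-admin"},
]

SENSITIVE_GROUPS = {"Domain Admins", "Enterprise Admins", "Schema Admins"}
AUTHORIZED_ACTORS = {"helpdesk-admin", "iam-admin"}  # hypothetical

def suspicious_additions(events):
    """Flag additions to sensitive groups made by unexpected accounts."""
    return [
        e for e in events
        if e["id"] == 4728
        and e["group"] in SENSITIVE_GROUPS
        and e["actor"] not in AUTHORIZED_ACTORS
    ]

for e in suspicious_additions(events):
    print(f'{e["actor"]} added {e["member"]} to {e["group"]}')
```

Without the underlying audit events, this distinction between an authorized administrator and an attacker escalating privilege cannot be made at all.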

Monitoring and reporting

As mentioned in the previous module, unless a system is present to record events as they happen, it is almost impossible to have an accurate picture of what is happening within your organization’s information systems infrastructure. Collecting system event telemetry allows you to determine not only what actions an external intruder might be taking on the organizational network, but also what unauthorized actions an authorized insider might be performing.

Organizations can use IDS and SIEM systems to collect, aggregate, and analyze system event telemetry. As security analytics software becomes more capable, it has become possible for these systems to alert administrators to abnormal activity. For example, a system might notice that an authorized insider accesses specific sensitive files only outside of office hours, an activity that might warrant further investigation.

In terms of developing a baseline security posture, an IDS or SIEM system that has analyzed event telemetry in enough detail to determine what constitutes normal activity is better placed to detect abnormal activity and raise an alert when it occurs.
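
The after-hours example amounts to comparing access times against a configured (or learned) window of normal activity. A minimal sketch with invented log data:

```python
# Hypothetical access log: (user, file, hour-of-day of access).
accesses = [
    ("alice", "payroll.xlsx", 10),
    ("alice", "payroll.xlsx", 14),
    ("alice", "payroll.xlsx", 2),   # 2 a.m. access
]

OFFICE_HOURS = range(8, 19)  # 08:00-18:59 treated as normal

def after_hours_access(log):
    """Flag accesses to sensitive files outside office hours."""
    return [(user, f, hour) for user, f, hour in log
            if hour not in OFFICE_HOURS]

print(after_hours_access(accesses))
# → [('alice', 'payroll.xlsx', 2)]
```

Production SIEM systems learn per-user baselines statistically rather than using a fixed window, but the principle is the same: telemetry plus a model of normal behavior yields actionable alerts.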

Overview

Organizations should not approach information security in an ad-hoc manner. One way of
ensuring that an organization’s approach to information security is deliberate and planned is
to document rules and procedures as organizational policies.

Organizational policies can provide stakeholders within the organization with clarity about not only how sensitive information is to be protected, but also which people within the organization are responsible for configuring and maintaining the controls that provide that protection. If no policies specify who is responsible for the security of information systems, some individuals may perform actions in the service of securing those systems that exceed what would, on further consideration, be deemed acceptable, while others may take minimal action to protect information systems because their responsibility for those systems isn’t clearly delineated.

For example, there have been scenarios where the administrators of security systems have
refused to hand over administrative account passwords to superiors when requested. In some
cases, this is because the administrators of those systems believe that doing so would
compromise the security of those systems. In other cases, it was because the administrator of
those systems had reason to believe that the superior was attempting to circumvent the
security controls of those systems. A clearly defined policy would spell out when such a
request should be honored and when such a request should be refused.

Policies that clearly delineate responsibilities assist organizations that are under attack because they spell out which individuals should perform which tasks in a crisis and how those tasks should be performed. If responsibilities are not clearly delineated, mistakes may be made in responding to the attack that lead to more severe consequences.

Policies should be developed in conjunction with all stakeholders. Your organization may be subject to specific regulations around data breaches. These regulations may determine which people within the organization are responsible for performing specific actions when a breach occurs; for example, a regulation might require that the CIO or the organization’s compliance officer report the breach to a specific authority within 48 hours of the breach being detected.

Processes

The processes that an organization should follow when maintaining an information security posture are similar to those outlined in the Blue Team Kill Chain section of the previous module. These processes can be categorized as follows:

 Pre-incident process
 Intra-incident process
 Post-incident process

Pre-incident processes

The pre-incident process essentially involves maintaining the organization’s ongoing baseline security posture; it represents the organization’s default security stance. This means ensuring that existing organizational policies and procedures are followed, including:

 An effective patch management strategy. Many breaches could have been prevented had organizations kept their information systems up to date with vendor patches and updates.
 Effective monitoring and alerting. Ensure that the appropriate telemetry is being
generated by information systems and that this telemetry is effectively analyzed for
anomalies that may indicate an intruder’s presence. Only an effectively calibrated
monitoring and alerting system can warn an organization that an incident is occurring.
 Ensuring good administrative practice. Ensuring good administrative practices, such
as only using privileged access workstations, just enough administration, privileged
access management, least privilege, and other techniques of limiting the usage of
administrative rights reduces the chance of an attacker successfully leveraging
privilege escalation should they gain a foothold within the organizational
infrastructure.
 Restricting possibility of lateral movement. Configure information systems so that the
possibility of an intruder moving laterally through those systems is minimized. This
can be accomplished by implementing and maintaining effective code integrity
policies as well as ensuring that networks are segmented so that only necessary
communication between hosts can occur.
 Ensuring good data classification and protection practices. Configure automatic classification mechanisms that apply classification labels to data as that data is generated, and automatic protection systems that secure that data based on the assigned classification label.
 Performing red team exercises on a regular basis. When not under attack by an intruder, perform regular intrusion drills to ensure that information security staff and appropriate stakeholders are well versed in how to react when an intrusion occurs.

Intra-incident process

After considering their information security posture and performing regular red-team
exercises, organizations should develop clear policies and procedures on how to react when
an intruder is detected in organizational information systems. Described in more detail in the
previous module, these include:

 Determine the extent of the compromise. Establish the degree to which the intruder has infiltrated the organization’s information systems. Perform a detailed and thorough investigation to ascertain which systems the intruder has compromised, how those systems were compromised, and when they were compromised.
 Plan a response. Ensure that the attempt to evict the intruder from the organizational
systems only occurs after the extent to which the intruder has compromised the
organization is determined. Developing and then implementing an effective response
plan gives an intruder less latitude to react than attempting to deal with the intruder on
an ad-hoc basis before the full extent of the intrusion is known. In most cases, the intruder has been present on the network for some time before being detected, so spending extra time determining the extent of the compromise and planning a response will not significantly worsen the overall severity of the breach.
 Enact the response. Once the plan to respond to the intruder has been developed, it should be enacted. Enacting the response involves not only evicting the intruder from the organizational network but also remediating the vulnerabilities that allowed the intruder to gain access in the first place.

Post-incident process

The post-incident process should involve further investigation into the “how,” “where,” “when,” and possibly the “why” of the intrusion. It should include an analysis of what was lacking in the implementation of the baseline security posture that allowed the intruder to gain access to the network. Once this analysis has been performed, steps should be taken to improve the relevant policies and procedures to minimize the chance that an intruder will be successful in the future.

Disclosure responsibility

Post-breach activity doesn’t stop once the configuration vulnerabilities that were leveraged to perform the intrusion are remediated. A growing body of legislation and regulation dictates that organizations must inform certain stakeholders if a breach occurs. Not only must information systems be remediated, but appropriate notifications must also be made. For example, in the United States the Health Insurance Portability and Accountability Act (HIPAA) requires that affected individuals, the US Department of Health and Human Services, and, in some cases, the media be notified if protected health information may have been exposed through a breach. The European General Data Protection Regulation (GDPR) requires notification of both the supervisory authority, generally a member state government agency, and affected data subjects when there is “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.” This notification must be provided “without undue delay and, where feasible, not later than 72 hours after having become aware of it.” In many US jurisdictions, state data breach laws mandate that impacted parties be notified if the information exposed could lead to fraud or identity theft.
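
The 72-hour GDPR window is straightforward to compute once the time of becoming aware of the breach is recorded; a minimal sketch:

```python
from datetime import datetime, timedelta

def notification_deadline(aware_at: datetime, hours: int = 72) -> datetime:
    """Latest time to notify the supervisory authority, counted from
    the moment the organization became aware of the breach."""
    return aware_at + timedelta(hours=hours)

aware = datetime(2024, 3, 1, 16, 30)  # hypothetical detection time
print(notification_deadline(aware))
# → 2024-03-04 16:30:00
```

Incident response runbooks often record this timestamp automatically when an incident ticket is opened, so the notification clock starts from documented evidence rather than recollection.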

An increasing part of the information security professional’s role isn’t simply to ensure that information systems are protected in the most technically competent manner possible, but also to ensure that regulations around the protection of data and systems, and around how an organization must respond to a breach, are followed. As mentioned earlier, internal organizational policies should specify which personnel are responsible for specific areas. This includes not only who is responsible for maintaining the security of specific systems and data, but also who is responsible for crafting the notifications to external parties impacted by the intrusion.
