Network Security Fundamentals

Devices and Connections

In this lesson, we will discuss how the globe is connected and detail some common
network devices.

The NET
In the 1960s, the U.S. Defense Advanced Research Projects Agency (DARPA) created
ARPANET, the precursor to the modern internet. ARPANET was the first packet-
switched network. A packet-switched network breaks data into small blocks
(packets), transmits each individual packet from node to node toward its
destination, and then reassembles the individual packets in the correct order at
the destination.
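To illustrate the idea, here is a minimal Python sketch of packet switching: data is broken into numbered packets, the packets may arrive out of order, and the receiver reassembles them by sequence number. The 4-byte packet size and dictionary-based "packet" format are illustrative assumptions, not a real protocol.

# Simplified illustration of packet switching: split data into packets,
# deliver them out of order, and reassemble them at the destination.
import random

def packetize(data: bytes, size: int = 4):
    """Break a byte string into sequence-numbered packets."""
    return [{"seq": i, "payload": data[i:i + size]}
            for i in range(0, len(data), size)]

def reassemble(packets):
    """Put packets back in order by sequence number and join the payloads."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"Hello, ARPANET!"
packets = packetize(message)
random.shuffle(packets)          # packets may take different paths and arrive out of order
assert reassemble(packets) == message
print(reassemble(packets))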

How Things Connect


The ARPANET evolved into the internet (often referred to as the network of
networks) because the internet connects multiple local area networks (LANs) to a
worldwide wide area network (WAN) backbone.

Today, billions of devices worldwide are connected to the internet and use the
Transmission Control Protocol/Internet Protocol (TCP/IP) to communicate with each
other over packet-switched networks. Specialized devices and technologies such as
routers, routing protocols, SD-WAN, the domain name system (DNS), and the world wide
web (WWW) facilitate communications between connected devices.

Common Network Devices


The basic operations of computer networks and the internet rely on several common
networking devices.

The following describes four common network devices and the icons used to represent
them in network diagrams.

Routers

Routers are physical or virtual devices that send data packets to destination
networks along a network path using logical addresses. Routers use various routing
protocols to determine the best path to a destination, based on variables such as
bandwidth, cost, delay, and distance. A wireless router combines the functionality
of a router and a wireless access point (AP) to provide routing between a wired and
wireless network.

Default Gateway

A default gateway is the node in a computer network using the Internet Protocol
(IP) suite that serves as the forwarding host (router) to other networks when no
other route matches the destination IP address of a packet. The default gateway
acts as an access point to another network, a transition that often involves not
only a change of addressing but also a different networking technology.

Access Point

An access point (AP) is a network device that connects to a router or wired network
and transmits a Wi-Fi signal so that wireless devices can connect to a wireless (or
Wi-Fi) network. A wireless repeater rebroadcasts the wireless signal from a
wireless router or AP to extend the range of a Wi-Fi network.

Hub

A hub (or concentrator) is a network device that connects multiple devices such as
desktop computers, laptop docking stations, and printers on a LAN. Network traffic
that is sent to a hub is broadcast out of all ports on the hub, which can create
network congestion and introduce potential security risks. Any device connected to
a hub can listen to and receive unicast and broadcast traffic from all devices
connected to the same hub. Unicast traffic is sent from one device to another
device; broadcast traffic is sent from one device to all devices.

Switches

A switch is essentially an intelligent hub that uses physical addresses to forward
data packets to devices on a network. Unlike a hub, a switch is designed to forward
data packets only to the port that corresponds to the destination device. This
transmission method (referred to as micro-segmentation) creates separate network
segments and effectively increases the data transmission rates available on the
individual network segments. Switches transmit data between connected devices more
securely than hubs because of micro-segmentation. A switch can also be used to
implement virtual LANs (VLANs), which logically segregate a network and limit
broadcast domains and collision domains.
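As a rough illustration of how a switch forwards by physical address, here is a minimal Python sketch of a learning switch: it records which port each source MAC address was seen on, forwards unicast frames only to the learned port, and floods only when the destination is unknown or a broadcast. The frame fields and port numbers are illustrative assumptions.

# Minimal learning-switch sketch: forward by destination MAC where known,
# otherwise flood to all ports except the one the frame arrived on.
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.ports = range(num_ports)
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port: int, src: str, dst: str):
        self.mac_table[src] = in_port            # learn where the sender lives
        if dst != BROADCAST and dst in self.mac_table:
            return [self.mac_table[dst]]         # micro-segmentation: one port only
        return [p for p in self.ports if p != in_port]   # flood unknown/broadcast

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, src="aa:aa", dst=BROADCAST))     # flooded out ports [1, 2, 3]
print(sw.receive(2, src="bb:bb", dst="aa:aa"))       # forwarded only to port 0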


Network Security Fundamentals

Routing

In this lesson, we will discuss routing and routing protocols, and detail some of
the factors used to determine which route is used.

Routed and Routing Protocols


Routed protocols, such as IP, manage packets with routing information that enables
those packets to be transported across networks using routing protocols.

Routing protocols are defined at the Network layer of the OSI model and specify how
routers communicate with one another on a network. Routing protocols can either be
static or dynamic.

The following describes each type of routing protocol.

Static Routing
A static routing protocol requires that routes be created and updated manually on a
router or other network device. If a static route is down, traffic can’t be
automatically rerouted unless an alternate route has been configured. Also, if the
route is congested, traffic can’t be automatically rerouted over the less congested
alternate route. Static routing is practical only in very small networks or for
very limited, special-case routing scenarios (for example, a destination that’s
used as a backup route or is reachable only via a single router). However, static
routing has low bandwidth requirements (routing information isn’t broadcast across
the network) and some built-in security (users can route only to destinations that
are specified in statically defined routes).

Dynamic Routing
A dynamic routing protocol can automatically learn new (or alternate) routes and
determine the best route to a destination. The routing table is updated
periodically with current routing information.

Dynamic Routing Protocol Classifications


Dynamic routing protocols can be classified further as distance vector, link state,
and path vector. A distance-vector protocol makes routing decisions based on two
factors: the distance (hop count or other metric) and vector (the exit router
interface). It periodically informs its peers and/or neighbors of topology changes.

Convergence (the time required for all routers in a network to update their routing
tables with the most current information such as link status changes) can be a
significant problem for distance-vector protocols.
Distance Vector: Routing Information Protocol
Routing Information Protocol (RIP) is an example of a distance-vector routing
protocol that uses hop count as its routing metric. To prevent routing loops, in
which packets effectively get stuck bouncing between various router nodes, RIP
implements a hop limit of 15, which limits the size of networks that RIP can
support. After a data packet crosses 15 router nodes (hops) between a source and a
destination, the destination is considered unreachable. In addition to hop limits,
RIP employs four other mechanisms to prevent routing loops.

Split Horizon
Prevents a router from advertising a route back out through the same interface from
which the route was learned

Triggered Updates
When a change is detected, the update gets sent immediately instead of waiting 30
seconds to send a RIP update.

Route Poisoning
Sets the hop count on a bad route to 16, which effectively advertises the route as
unreachable

Hold Down Timers


Causes a router to start a timer when the router first receives information that a
destination is unreachable. Subsequent updates about that destination will not be
accepted until the timer expires. This timer also helps avoid problems associated
with flapping. Flapping occurs when a route (or interface) repeatedly changes state
(Up, Down, Up, Down) over a short period of time.
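As a rough illustration of the distance-vector behavior described above, here is a minimal Python sketch of a RIP-style update: a router merges a neighbor's advertised routes into its own table using hop count as the metric, and treats a metric of 16 as unreachable (route poisoning). It is a simplification under those assumptions, not an implementation of the RIP protocol.

# Simplified RIP-style distance-vector update.
# A routing table maps destination -> (hop count, next hop); 16 means unreachable.
INFINITY = 16

def merge_neighbor_update(table, neighbor, neighbor_routes):
    """Merge routes advertised by a neighbor, adding one hop for the link to it."""
    for dest, hops in neighbor_routes.items():
        new_metric = min(hops + 1, INFINITY)
        current = table.get(dest, (INFINITY, None))
        # Accept the route if it is better, or if it came from the current next hop
        # (so a poisoned route from that neighbor overrides the stale entry).
        if new_metric < current[0] or current[1] == neighbor:
            table[dest] = (new_metric, neighbor)

table = {"10.0.0.0/8": (4, "R2")}
merge_neighbor_update(table, "R3", {"10.0.0.0/8": 1, "172.16.0.0/16": 3})
merge_neighbor_update(table, "R3", {"172.16.0.0/16": INFINITY})   # route poisoning
print(table)   # 10.0.0.0/8 now via R3 at 2 hops; 172.16.0.0/16 marked unreachable (16)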

Link State
A link-state protocol requires every router to calculate and maintain a complete
map, or routing table, of the entire network. Routers that use a link-state
protocol periodically transmit updates that contain information about adjacent
connections, or link states, to all other routers in the network. Click the tabs
for more information about link-state protocols and a use case.

Compute-Intensive

Considers Numerous Factors

Convergence

Use Case

Path Vector
A path-vector protocol is similar to a distance-vector protocol but without the
scalability issues associated with limited hop counts in distance-vector protocols.
Each routing table entry in a path-vector protocol contains path information that
gets dynamically updated.

BGP
Border Gateway Protocol (BGP) is the best-known path-vector protocol and is used to
exchange routing information between autonomous systems, such as internet service
providers.


Network Security Fundamentals

Networks and Topologies

In this lesson, we will discuss types of LANs and WANs and the topologies of those
area networks.


Area Networks and Topologies


Most computer networks are broadly classified as either LANs or WANs.

LANs
A LAN is a computer network that connects end-user devices such as laptop and
desktop computers, servers, printers, and other devices so that applications,
databases, files, file storage, and other networked resources can be shared among
authorized users on the LAN. A LAN can be wired, wireless, or a combination of
wired and wireless. Examples of networking equipment commonly used in LANs include
bridges, hubs, repeaters, switches, and wireless APs. Two basic network topologies
(with many variations) are commonly used in LANs: star topology and mesh topology.
Other once-popular network topologies, such as ring and bus, are rarely found in
modern networks.

Star
Each node on the network is directly connected to a switch, hub, or concentrator,
and all data communications must pass through the switch, hub, or concentrator. The
switch, hub, or concentrator can thus become a performance bottleneck or single
point of failure in the network. A star topology is ideal for practically any size
environment and is the most commonly used basic LAN topology.

Mesh
All nodes are interconnected to provide multiple paths to all other resources. A
mesh topology may be used throughout the network or only for the most critical
network components such as routers, switches, and servers to eliminate performance
bottlenecks and single points of failure.

WANs
A WAN is a computer network that connects multiple LANs or other WANs across a
relatively large geographic area such as a small city, a region or country, a
global enterprise network, or the entire planet (as is the case for the internet).

Examples of networking equipment commonly used in WANs include access servers,
firewalls, modems, routers, virtual private network (VPN) gateways, and WAN
switches.

Traditional WANs rely on physical routers to connect remote or branch users to
applications hosted in data centers. Each router has a data plane, which holds the
information, and a control plane, which tells the data where to go. Where data
flows is typically determined by a network engineer or administrator who writes
rules and policies, often manually, for each router on the network. This process
can be time-consuming and prone to error.

SD-WAN
A software-defined WAN (SD-WAN) separates the control and management processes from
the underlying networking hardware, making them available as software that can be
easily configured and deployed. A centralized control plane means network
administrators can write new rules and policies, and then configure and deploy them
across an entire network at once.

SD-WAN Benefits
SD-WAN makes management and direction of traffic across a network easier. SD-WAN
offers many benefits to geographically distributed organizations. Click the tabs
for more information about the benefits SD-WAN offers.

Simplicity
Improved Performance
Reduced Costs
Because each device is centrally managed, with routing based on application
policies, WAN managers can create and update security rules in real time as network
requirements change. The combination of SD-WAN with zero-touch provisioning, which
is a feature that helps automate the deployment and configuration processes, also
helps organizations further reduce the complexity, resources, and operating
expenses required to turn up new sites.

Other Area Networks


In addition to LANs and WANs, many other types of area networks are used for
different purposes. Click the arrows for more information about other area networks
and their purposes.

Campus Area Networks (CANs) and Wireless Campus Area Networks (WCANs)
CANs and WCANs connect multiple buildings in a high-speed network (for example,
across a corporate or university campus).

Metropolitan Area Networks (MANs) and Wireless Metropolitan Area Networks (WMANs)
MANs and WMANs extend networks across a relatively large area, such as a city.

Personal Area Networks (PANs) and Wireless Personal Area Networks (WPANs)
PANs and WPANs connect an individual’s electronic devices such as laptop computers,
smartphones, tablets, virtual personal assistants (for example, Amazon Alexa, Apple
Siri, Google Assistant, and Microsoft Cortana), and wearable technology to each
other or to a larger network.

Value-Added Networks (VANs)


VANs are a type of extranet that allows businesses within an industry to share
information or integrate shared business processes.

Virtual Local-Area Networks (VLANs)


VLANs segment broadcast domains in a LAN, typically into logical groups (such as
business departments). VLANs are created on network switches.

Wireless Local-Area Networks (WLANs)


WLANs, also known as Wi-Fi networks, use wireless APs to connect wireless-enabled
devices to a wired LAN. Wireless wide-area networks (WWANs) extend wireless network
coverage over a large area, such as a region or country, typically using mobile
cellular technology.

Storage Area Networks (SANs)


SANs connect servers to a separate physical storage device (typically a disk
array).


Network Security Fundamentals

Domain Name System

In this lesson, we will discuss how the Domain Name System (DNS) enables internet
addresses, such as www.paloaltonetworks.com, to be translated into routable IP
addresses.

What Is DNS?
Domain Name System is a protocol that translates a user-friendly domain name to an
IP address so that users can access computers, websites, services, or other
resources on the internet or private networks.

DNS is a distributed, hierarchical internet database that maps fully qualified
domain names (FQDNs) for computers, services, and other resources, such as a
website address (also known as a URL), to IP addresses, similar to how a contact
list on a smartphone maps the names of businesses and individuals to phone numbers.
A root name server is the authoritative name server for a DNS root zone.
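A name can be resolved programmatically through the operating system's resolver, which in turn queries DNS. The short Python sketch below uses the standard library's socket.getaddrinfo() to look up the addresses (A and AAAA records) for a host name; the host name shown is just an example.

# Resolve a host name to its IP addresses using the OS resolver (which queries DNS).
import socket

def resolve(hostname: str):
    """Return the unique IP addresses (A and AAAA records) for a host name."""
    results = socket.getaddrinfo(hostname, None)
    return sorted({addr[4][0] for addr in results})

print(resolve("www.paloaltonetworks.com"))   # prints the resolved IPv4/IPv6 addresses (results vary)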

DNS

To create a new domain name that will be accessible via the internet, you must
register your unique domain name with a domain name registrar, such as GoDaddy or
Network Solutions. This registration is similar to listing a new phone number in a
phone directory. DNS is critical to the operation of the internet.

Root Name Server

Thirteen root name servers (actually, 13 networks comprising hundreds of root name
servers) are configured worldwide. They are named a.root-servers.net through
m.root-servers.net. DNS servers are typically configured with a root hints file
that contains the names and IP addresses of the root servers.

Video: How DNS Works


Watch the video to see an example scenario of how DNS works when a host on a
network needs to connect to another host.

DNS Record Types


Click the tabs for more information about each DNS record type.

A or AAAA
CNAME
MX
PTR
SOA
NS
TXT
An A (IPv4) or AAAA (IPv6) record maps a domain or subdomain to one or more IP
addresses.


Network Security Fundamentals

Internet of Things

In this lesson, we will discuss how Palo Alto Networks Internet of Things (IoT)
Security provides visibility, prevention, risk assessment, and enforcement of
policies.

Global Internet Expansion


With over five billion internet users worldwide, which represents well over half
the world’s population, the internet connects businesses, governments, and people
across the globe. Our reliance on the internet will continue to grow, with nearly
30 billion devices and “things” – including autonomous vehicles, household
appliances, wearable technology, and more – connecting to the IoT and nearly nine
billion worldwide smartphone subscriptions that will use a total of 160 exabytes
(EB) of monthly data by 2025.

IoT Connectivity Technologies


IoT connectivity technologies are broadly categorized into five areas: cellular,
satellite, short-range wireless, low-power WAN (LP-WAN) and other wireless WAN
(WWAN), and Identity of Things (IDoT). Click the tabs for more information about
each area.
Cellular
Satellite
Short-Range Wireless
LP-WAN and WWAN
IDoT
2G/2.5G: Due to the low cost of 2G modules, relatively long battery life, and large
installed base of 2G sensors and M2M applications, 2G connectivity remains a
prevalent and viable IoT connectivity option.

3G: IoT devices with 3G modules use either Wideband Code Division Multiple Access
(W-CDMA) or Evolved High Speed Packet Access (HSPA+ and Advanced HSPA+) to achieve
data transfer rates of between 384Kbps and 168Mbps.

4G/Long-Term Evolution (LTE): 4G/LTE networks enable real-time IoT use cases, such
as autonomous vehicles, with 4G LTE Advanced Pro delivering speeds in excess of
3Gbps and less than 2 milliseconds of latency.

5G: 5G cellular technology provides significant enhancements compared to 4G/LTE


networks and is backed by ultra-low latency, massive connectivity and scalability
for IoT devices, more efficient use of licensed spectrum, and network slicing for
application traffic prioritization.
Hybrid IoT Security
According to research conducted by the Palo Alto Networks Unit 42 threat
intelligence team, the general security posture of IoT devices is declining,
leaving organizations vulnerable to new IoT-targeted malware and older attack
techniques that IT teams have long forgotten.

Palo Alto Networks IoT Security


Palo Alto Networks IoT security enables security teams to rapidly identify and
protect all unmanaged IoT devices with a machine learning-based, signature-less
approach. Palo Alto Networks created the industry’s first turnkey IoT security
offering, delivering visibility, prevention, risk assessment, and enforcement in
combination with our ML-powered next-generation firewall. There is no need to
deploy any new network infrastructure or change existing operational processes.

Click the tabs for more information about the issues that Palo Alto Networks IoT
security helps mitigate.

IoT Devices Unencrypted and Unsecured

IoMT Devices Running Outdated Software

Healthcare Orgs Practicing Poor Security Hygiene

IoT-Focused Cyberattacks Target Legacy Protocols

Industrial IoT

Course Summary
Now that you've completed this course, you should be able to:
Describe basic operations of enterprise networks, common networking devices, routed
and routing protocols, network types and topologies, and services such as DNS


Network Security Fundamentals

IP Addressing

This lesson describes basic numbering systems, IP addressing, and the structure of
IPv4 and IPv6 addresses.

Numbering Systems
You must understand how network systems are addressed before following the path
data takes across internetworks. Physical, logical, and virtual addressing in
computer networks require a basic understanding of decimal (base 10), hexadecimal
(base 16), and binary (base 2) numbering.

Decimal, Hexadecimal, and Binary Notations


Click the tabs for more information regarding each notation.

Decimal (Base 10)

Hexadecimal (Base 16)

Binary (Base 2)
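The conversions among these notations can be checked directly in Python, as the short sketch below shows; the octet value 192 is used purely as an example.

# Converting a value among decimal, hexadecimal, and binary notation.
value = 192                       # decimal (base 10)
print(hex(value))                 # '0xc0'        -> hexadecimal (base 16)
print(bin(value))                 # '0b11000000'  -> binary (base 2)
print(int("c0", 16), int("11000000", 2))   # back to decimal: 192 192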

IP Addressing Basics
Data packets are routed over a TCP/IP network using IP addressing information.
IPv4, which is the most widely deployed version of IP, consists of a 32-bit logical
IP address.

Loopback and Private Addresses


Loopback network addresses are used for testing and troubleshooting. Private
addresses are reserved for use in private networks and are not routable on the
internet.

Click the tabs for the address ranges of loopback addresses and private addresses.

Loopback Address Range
The IPv4 loopback address range is 127.0.0.0 to 127.255.255.255 (127.0.0.0/8);
127.0.0.1 is the most commonly used loopback address.

Private Address Ranges
The IPv4 private address ranges (defined in RFC 1918) are 10.0.0.0 to
10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255.

Subnet Mask
A subnet mask is a number that identifies the network portion of an IPv4 address,
distinguishing it from the host portion of the address. The network portion of a
subnet mask is represented by contiguous “on” (1) bits beginning with the most
significant bit.

For example, in the subnet mask 255.255.255.0, the first three octets represent the
network portion and the last octet represents the host portion of an IP address.
The decimal number 255 is represented in binary notation as 11111111. As a result,
the equivalent of the decimal subnet mask 255.255.255.0 in binary notation is
11111111.11111111.11111111.00000000.

The default (or standard) subnet masks for Class A, B, and C networks are as
follows:

Class A: 255.0.0.0
Class B: 255.255.0.0
Class C: 255.255.255.0
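To make the masking operation concrete, here is a small Python sketch that applies the default Class C mask to an address with a bitwise AND to separate the network and host portions. The address 192.168.1.10 is an illustrative example.

# Applying a subnet mask: a bitwise AND keeps the network portion of an address,
# while ANDing with the inverted mask keeps the host portion.
def to_int(dotted: str) -> int:
    a, b, c, d = (int(o) for o in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

address = to_int("192.168.1.10")
mask    = to_int("255.255.255.0")                 # default Class C subnet mask
print(to_dotted(address & mask))                  # network portion: 192.168.1.0
print(to_dotted(address & ~mask & 0xFFFFFFFF))    # host portion:    0.0.0.10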
IPv4 Structure
The 32-bit address space (four octets) of an IPv4 address limits the total number
of unique public IP addresses to about 4.3 billion. In 2018, the pool of available
IPv4 addresses that can be assigned to organizations was officially depleted. A
small pool of IPv4 addresses was reserved by each regional internet registry to
facilitate the transition to IPv6.

Here is an example of an IPv4 address in dotted decimal notation and binary
notation: 192.168.1.1 in dotted decimal notation is
11000000.10101000.00000001.00000001 in binary notation.

IPv6 Structure
IPv6 addresses, which use a 128-bit hexadecimal address space providing about 3.4 x
10^38 (340 undecillion) unique IP addresses, were created to replace IPv4 when the
IPv4 address space was exhausted.

The basic format for an IPv6 address is: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx


where x represents a hexadecimal digit (0–f).

This is an example of an IPv6 address: 2001:0db8:0000:0000:0008:0800:200c:417a

An IPv6 address consists of 32 hexadecimal digits grouped into eight hextets of
four hexadecimal digits each, separated by colons.

Simplifying IPv6 Addresses


IPv6 security features are specified in Request for Comments (RFC) 7112 and include
techniques to prevent fragmentation exploits in IPv6 headers and implementation of
Internet Protocol Security at the Network layer of the OSI model.

Click the tabs to see which rules the Internet Engineering Task Force (IETF) has
defined to simplify an IPv6 address.

1. One Hexadecimal Digit


2. Two Colons
3. Mixed Environments
Leading zeros in an individual hextet can be omitted, but each hextet must have at
least one hexadecimal digit, except as noted in the next rule. Application of this
rule to IPv6 address: 2001:0db8:0000:0000:0008:0800:200c:417a yields this result:
2001:db8:0:0:8:800:200c:417a.
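Python's standard ipaddress module applies these simplification rules automatically, which gives a quick way to check the compressed form of the example address:

# The ipaddress module applies the IETF shortening rules (drop leading zeros,
# collapse the longest run of all-zero hextets into "::").
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0008:0800:200c:417a")
print(addr.compressed)   # 2001:db8::8:800:200c:417a
print(addr.exploded)     # 2001:0db8:0000:0000:0008:0800:200c:417a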

NAT
Network address translation (NAT) is a method of mapping an IP address space into
another by modifying network address information in the IP header of packets while
they are in transit across a traffic routing device. The simplest type of NAT
provides a one-to-one translation of IP addresses, which is used to allow host
devices configured with a private IP address to send and receive traffic on the
internet.

Click the numbers for details about how the firewall performs a source NAT
function.

The table displays the IP addresses and zones before and after the NAT translation.

Before NAT:
  Source: 192.168.15.47 (zone Users_Net)
  Destination: 203.0.113.38 (zone internet)

After NAT:
  Source: 198.51.100.22 (zone internet)
  Destination: 203.0.113.38 (zone internet)
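The translation the firewall performs can be pictured as a simple rewrite of the source address on the way out, as in this minimal Python sketch. The addresses and zone names come from the example table above; the function name and packet structure are illustrative.

# Minimal source-NAT sketch: rewrite the private source address of an outbound
# packet to the firewall's public address, keeping the destination unchanged.
PUBLIC_SOURCE = "198.51.100.22"        # post-translation address from the example table

def source_nat(packet: dict) -> dict:
    translated = dict(packet)
    translated["src"] = PUBLIC_SOURCE
    translated["src_zone"] = "internet"
    return translated

before = {"src": "192.168.15.47", "src_zone": "Users_Net",
          "dst": "203.0.113.38", "dst_zone": "internet"}
print(source_nat(before))   # source becomes 198.51.100.22; destination is unchanged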

Network Security Fundamentals

Subnetting

This lesson describes subnetting fundamentals, network classes, and formats of


subnet masking.

Introduction to Subnetting
Subnetting is a technique used to divide a large network into smaller, multiple
subnetworks by segmenting an IP address into two parts: the network portion of the
address and the host portion of the address.

Network Classes
Subnetting can be used to limit network traffic or limit the number of devices that
are visible to, or can connect to, each other.

Routers examine IP addresses and subnet values (called masks) to determine the best
network path on which to forward packets. The subnet mask is a required element in
IPv4.

Class A and B Subnets


Class A and Class B IPv4 addresses use smaller mask values and support larger
numbers of nodes than Class C IPv4 addresses for their default address assignments.
Class A networks use a default 8-bit (255.0.0.0) subnet mask, which provides a
total of more than 16 million (2^24 -2) available IPv4 node addresses. Class B
networks use a default 16-bit (255.255.0.0) subnet mask, which provides more than
65,000 (2^16 - 2) available IPv4 node addresses. Two addresses must be reserved in
each network, one for the network address and one for the broadcast address, which
is why 2 is subtracted from the total number of node addresses available in these
classful networking examples.

Class C Subnets
For a Class C IPv4 address, there are 254 possible node (or host) addresses (2^8,
or 256, potential addresses, but you lose two addresses for each network: one for the
base network address and the other for the broadcast address). A typical Class C
network uses a default 24-bit subnet mask (255.255.255.0). This subnet mask value
identifies the network portion of an IPv4 address, with the first three octets
being all ones (11111111 in binary notation, 255 in decimal notation). The mask
displays the last octet as zero (00000000 in binary notation). For a Class C IPv4
address with the default subnet mask, the last octet is where the node-specific
values of the IPv4 address are assigned.
For example, in a network with an IPv4 address of 192.168.1.0 and a mask value of
255.255.255.0, the network portion of the address is 192.168.1, and the node
portion of the address or the last 8 bits provide 254 available node addresses (2^8
- 2). Just as in the Class A and B examples, two addresses are reserved, one for
the network address and one for the broadcast address, which is why 2 is subtracted
from the total number of available node addresses.
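The arithmetic in this example can be verified with Python's ipaddress module, which counts total addresses and usable hosts for a given network:

# Verify the Class C example: a /24 network has 256 addresses, 254 of them usable,
# after subtracting the network address and the broadcast address.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)                                   # 255.255.255.0
print(net.num_addresses)                             # 256
print(len(list(net.hosts())))                        # 254 usable host addresses
print(net.network_address, net.broadcast_address)    # 192.168.1.0 192.168.1.255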

CIDR
Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and
IP routing that replaces classful IP addressing (for example, Class A, B, and C
networks) with classless IP addressing.

Variable-Length Subnet Masking

Unlike subnetting, which divides an IPv4 address along an arbitrary (default)
classful 8-bit boundary (8 bits for a Class A network, 16 bits for a Class B
network, 24 bits for a Class C network), CIDR allocates address space on any
address bit boundary (known as variable-length subnet masking, or VLSM).

Here is an example of variable-length subnet masking.
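In place of the original example figure, the following Python sketch illustrates the same idea: a /24 block carved into subnets of different prefix lengths to fit differently sized groups of hosts. The specific network and prefix lengths are illustrative assumptions.

# Variable-length subnet masking: carve one /24 into subnets of different sizes.
import ipaddress

block  = ipaddress.ip_network("192.168.1.0/24")
office = list(block.subnets(new_prefix=25))[0]        # 192.168.1.0/25   (126 hosts)
lab    = list(block.subnets(new_prefix=26))[2]        # 192.168.1.128/26 (62 hosts)
links  = list(block.subnets(new_prefix=30))[48]       # 192.168.1.192/30 (2 hosts)
for subnet in (office, lab, links):
    print(subnet, "-", len(list(subnet.hosts())), "usable hosts")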

Supernetting

CIDR is used to reduce the size of routing tables on internet routers by
aggregating multiple contiguous network prefixes (known as supernetting), and it
also helps slow the depletion of public IPv4 addresses.

Here is an example of supernetting.
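Again in place of the original figure, here is a short Python sketch of supernetting: four contiguous /24 prefixes are aggregated into a single /22 route. The prefixes shown are illustrative assumptions.

# Supernetting: aggregate contiguous prefixes into a single, shorter prefix.
import ipaddress

prefixes = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(prefixes))
print(summary)   # [IPv4Network('172.16.0.0/22')] -- one route instead of four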

Network Security Fundamentals

TCP/IP and OSI Model

This lesson describes the functions of physical, logical, and virtual addressing in
networking, IP addressing basics, subnetting fundamentals, OSI and the TCP/IP
models, and the packet lifecycle.

TCP/IP Overview
In cybersecurity, you must understand that applications sending data from one host
computer to another host computer will first segment the data into blocks and will
then forward these data blocks to the TCP/IP stack for transmission.

TCP/IP Protocol Stack


The TCP stack places the block of data into an output buffer on the server and
determines the maximum segment size of individual TCP blocks permitted by the
server operating system. The TCP stack then divides the data blocks into
appropriately sized segments, adds a TCP header, and sends the segment to the IP
stack on the server.

The IP stack adds source and destination IP addresses to the TCP segment and
notifies the server operating system that it has an outgoing message that is ready
to be sent across the network. When the server operating system is ready, the IP
packet is sent to the network adapter, which converts the IP packet to bits and
sends the message across the network.
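In practice, an application never builds TCP segments or IP packets itself; it simply hands a block of data to the operating system's TCP/IP stack through the sockets API, as in this minimal Python sketch. The host name and port are placeholders, and the connection will only succeed if something is actually listening there.

# The application writes a block of data to a TCP socket; the OS TCP/IP stack
# segments it, adds TCP and IP headers, and transmits it on the network.
import socket

data = b"example application data" * 100      # a block of application data

with socket.create_connection(("server.example.com", 8080), timeout=5) as sock:
    sock.sendall(data)        # the stack handles segmentation, sequencing, and retransmission
    reply = sock.recv(4096)   # read up to 4096 bytes of the server's response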

OSI and TCP/IP Models


The Open Systems Interconnection (OSI) and Transmission Control Protocol/Internet
Protocol (TCP/IP) models define standard protocols for network communication and
interoperability.

Layered Approach
The OSI and TCP/IP models use a layered approach to provide more clarity and
efficiency in different areas.

Clarify Functions
Clarify the general functions of a communications process.

Reduce Processes
Reduce complex networking processes to simpler sublayers and components.

Promote Interoperability
Promote interoperability through standard interfaces.
Enable Layer Changes
Enable vendors to change individual features at a single layer rather than rebuild
the entire protocol stack.

Facilitate Troubleshooting
Facilitate troubleshooting by isolating and identifying issues within specific
layers, allowing for targeted analysis and resolution.

OSI Model and TCP/IP Protocol Layers


The OSI model is defined by the International Organization for Standardization and
consists of seven layers. This model is a theoretical model used to logically
describe networking processes.

The TCP/IP protocol was originally developed by the U.S. Department of Defense
(DoD) and actually preceded the OSI model. This model defines actual networking
requirements, for example, for frame construction.

OSI Layers
Click the tabs for more information about the OSI layers.

Application (Layer 7 or L7)

Presentation (Layer 6 or L6)

Session (Layer 5 or L5)

Transport (Layer 4 or L4)

Network (Layer 3 or L3)


Data Link (Layer 2 or L2)

Physical (Layer 1 or L1)

TCP/IP Protocol Layers


The following is more information about the TCP/IP protocol layers.

Application (Layer 4 or L4)


This layer consists of network applications and processes, and it loosely
corresponds to Layers 5 through 7 of the OSI model.

Transport (Layer 3 or L3)


This layer provides end-to-end delivery, and it corresponds to Layer 4 of the OSI
model.

Internet (Layer 2 or L2)


This layer defines the IP datagram and routing, and it corresponds to Layer 3 of
the OSI model.

Network Access (Layer 1 or L1)


This layer also is referred to as the Link layer. It contains routines for
accessing physical networks, and it corresponds to Layers 1 and 2 of the OSI model.


Network Security Fundamentals

Packet Lifecycle

This lesson describes the lifecycle of a packet and how an application sends data
across a network.

Circuit Switching vs. Packet Switching


The following describes the differences between circuit switching and packet
switching.

Circuit Switching
In a circuit-switched network, a dedicated physical circuit path is established,
maintained, and terminated between the sender and receiver across a network for
each communications session. Before the development of the internet, most
communications networks, such as telephone company networks, were circuit-switched.

Packet Switching

The internet is a packet-switched network comprising hundreds of millions of
routers and billions of servers and user endpoints. In a packet-switched network,
devices share bandwidth on communications links to transport packets between a
sender and a receiver across a network. This type of network is more resilient to
error and congestion than circuit-switched networks.

Packet Segmentation Workflow


The following describes how an application sends data across the network through
packet segmentation. Click the arrow for more information about the packet
segmentation workflow.

1. Send Block of Data to TCP Stack

An application that needs to send data across the network (for example, from a
server to a client computer) first creates a block of data and sends it to the TCP
stack on the server.


Network Security Fundamentals

Data Encapsulation

This lesson describes how data is encapsulated and flows through the layers of the
OSI model.

How Does Data Encapsulation Work?


Data encapsulation wraps protocol information from the (OSI or TCP/IP) layer
immediately above in the data section of the layer below.

Encapsulation (Sending Host)

In the OSI model and TCP/IP protocol, data is passed from the highest layer (Layer
7 in the OSI model, Layer 4 in the TCP/IP model) downward through each layer to the
lowest layer (Layer 1 in the OSI model and the TCP/IP model). It is then
transmitted across the network medium to the destination node, where it is passed
upward from the lowest layer to the highest layer. Each layer communicates only
with the adjacent layer immediately above and below it. This communication is
achieved through a process known as data encapsulation (or data hiding), which
wraps protocol information from the layer immediately above in the data section of
the layer immediately below.

PDU (Receiving Host)

A PDU describes a unit of data at a particular layer of a protocol. For example, in
the OSI model, a Layer 1 PDU is known as a bit, a Layer 2 PDU is known as a frame,
a Layer 3 PDU is known as a packet, and a Layer 4 PDU is known as a segment or
datagram.

When a client or server application sends data across a network, a header (and
trailer in the case of Layer 2 frames) is added to each data packet from the
adjacent layer below it as the data passes through the protocol stack. On the
receiving end, the headers (and trailers) are removed from each data packet as it
passes through the protocol stack to the receiving application.
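The wrapping of each layer's PDU inside the layer below can be sketched with a few bytes of mock header, as in this simplified Python example; the header contents are invented placeholders, not real protocol formats.

# Simplified encapsulation: each layer prepends its own (mock) header to the PDU
# from the layer above; the receiver strips the headers in reverse order.
app_data  = b"GET /index.html"                 # Layer 7 data
segment   = b"TCP|" + app_data                 # Layer 4 adds a transport header
packet    = b"IP|"  + segment                  # Layer 3 adds a network header
frame     = b"ETH|" + packet + b"|FCS"         # Layer 2 adds a header and a trailer

# De-encapsulation at the receiving host:
recv_packet  = frame[len(b"ETH|"):-len(b"|FCS")]
recv_segment = recv_packet[len(b"IP|"):]
recv_data    = recv_segment[len(b"TCP|"):]
assert recv_data == app_data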

Course Summary
Now that you've completed this course, you should be able to:

Describe IP addressing
Describe subnetting
List the TCP/IP and OSI model layers
Detail the lifecycle of a packet
Detail how data is encapsulated


Fundamentals of Network Security

Endpoint Security

In this lesson, we will explore endpoint security challenges and solutions.

Endpoint Security
In 2022, there were more than 11.5 billion internet of things (IoT) devices
worldwide, including machine-to-machine (M2M), wide-area IoT, short-range IoT,
massive-and-critical IoT, and multi-access edge computing (MEC) devices.
Traditional endpoint security encompasses numerous security tools, such as anti-
malware software, personal firewalls, HIPSs, and MDM software.

Endpoint security requires implementation of effective endpoint security best
practices, including patch management and configuration management.

The elements of an endpoint security system include the following:

Endpoint Protection
Advanced malware and script-based attacks can bypass traditional antivirus
solutions with ease and potentially wreak havoc on your business.

Click the tabs for more information about the importance of endpoint protection.

Endpoint Classification

Threat Landscape

Network Firewall Restrictions



Fundamentals of Network Security

Golden Image

In this lesson, we will explore the golden image for endpoints in your environment
and how to make them more secure.

Important Terminology
Ransomware
Ransomware is a type of malware that threatens to publish the victim's data or
perpetually block access to it unless a ransom is paid.

Golden Image
Endpoint security begins with a standard (“golden”) image that ensures consistent
configuration of devices across the organization, which includes disabling or
removing operating system features and services that are not needed (“hardening”),
installing current security updates, and installing core applications.

Heuristic-Based
Behavior-Based
Growing Security Challenges
In practice, an organization will deploy numerous golden images, to, for example,
support different device types, workgroups or departments, and user types (such as
standard users and power users). Most organizations deploy several security
products to protect their endpoints, including personal firewalls, host-based
intrusion prevention systems (HIPSs), mobile device management (MDM), mobile
application management (MAM), data loss prevention (DLP), and antivirus software.
Nevertheless, cyber breaches continue to increase in frequency, variety, and
sophistication.
Additionally, the numbers and types of endpoints – including mobile and IoT devices
– have grown exponentially and increased the attack surface. New variants of the
Gafgyt, Mirai, and Muhstik botnets, among others, specifically target IoT devices.
Additionally, new search engines, such as Shodan (Shodan.io), can automate the
search for vulnerable internet-connected endpoints. Faced with the rapidly changing
threat landscape, traditional endpoint security solutions and antivirus can no
longer prevent security breaches on the endpoint.

Endpoint Security
Click the tabs for more information about why endpoint security is needed.

Protection from Zero-day Exploits

Limitations Due to Regulations and Laws

Stop Malware, Exploits, and Ransomware Attacks


Fundamentals of Network Security

Firewalls and HIPS

In this lesson, we will explore host-based intrusion prevention systems (HIPS).

Firewall Types
Network firewalls protect an enterprise network against threats from an external
network, such as the internet. HIPS is another approach to endpoint protection that
relies on an agent installed on the endpoint to detect malware.

The following describes the different types of firewalls and HIPS.

Network Firewalls
Most traditional port-based network firewalls do little to protect endpoints inside
the enterprise network from threats that originate from within the network, such as
another device that has been compromised by malware and is propagating throughout
the network.

Host-Based Firewalls
Personal (or host-based) firewalls are commonly installed and configured on laptop
and desktop PCs. Personal firewalls typically operate as Layer 7 (Application layer
- OSI Model) firewalls that allow or block traffic based on an individual (or
group) security policy. Personal firewalls are particularly helpful on laptops used
by remote or traveling users who connect their laptop computers directly to the
internet (for example, over a public Wi-Fi connection).

Also, a personal firewall can control outbound traffic from the endpoint to help
prevent the spread of malware from that endpoint. However, note that disabling or
otherwise bypassing a personal firewall is a common and basic objective in most
advanced malware today.

Operating System Firewalls


Windows Firewall is an example of a personal firewall that is installed as part of
the Windows desktop or mobile operating system. A personal firewall protects only
the endpoint device that it is installed on, but it provides an extra layer of
protection inside the network.

Netfilter, or iptables, is the most popular open source, command line interface-
based Linux firewall. Many system administrators prefer to use it for their server
protection as it operates as the first line of defense for Linux server protection.

Host-Based Intrusion Prevention Systems (HIPS)


A HIPS can be either signature-based or anomaly-based, making it susceptible to the
same issues as other signature- and anomaly-based endpoint protection approaches.

Also, HIPS software often causes significant performance degradation on endpoints.


A recent Palo Alto Networks survey found that 25 percent of respondents indicated
that HIPS solutions “caused significant end user performance impact."


Fundamentals of Network Security

Mobile Device Management

In this lesson, we will explore how mobile device management (MDM) centralizes
management and security of mobile devices.

Important Terminology
Click each tab to read the important terminology in this lesson.

Jailbreaking

Rooting

Mobile Device Management and Security


MDM software provides endpoint security for mobile devices such as smartphones and
tablets.

MDM software provides centralized management and security for mobile devices. Here
is more information about the device protection provided by MDM software.

Data Loss Prevention (DLP)
Policy Enforcement
Malware Protection
Software Distribution
Remote Erase/Wipe
Geofencing and Location Services

Fundamentals of Network Security

Server Management

In this lesson, we will explore server and systems administration tasks that secure
your network environment.

Server Management Tasks


Server and system administrators perform a variety of important tasks in a secure
network environment.

Click the tabs for more information about each element of server management.

Server and System Administration

Identity and Access Management

Directory Services

Vulnerability and Patch Management

Configuration Management


Fundamentals of Network Security

Structured Host and Network Troubleshooting

In this lesson, we will explore structured host and network troubleshooting.

Structured Host and Network Troubleshooting


Network administrators should use a systematic process to troubleshoot network
problems when they occur to restore the network to full production as quickly as
possible without causing new issues or introducing new security vulnerabilities.
Resolving network problems quickly and efficiently is a skill that is highly sought
after in IT.

Network Baseline
A baseline provides quantifiable metrics that are periodically measured with
various network performance monitoring tools, protocol analyzers, and packet
sniffers.

Click the tabs for more information about what comprises baseline metrics and their
importance.

Composition

Importance

Network Documentation
Network documentation should include logical and physical diagrams, application
data flows, change management logs, user and administration manuals, and warranty
and support information. Network baselines and documentation should be updated any
time a significant change to the network occurs and as part of the change
management process of an organization.


Logical Troubleshooting Using the OSI Model


The OSI model provides a logical model for troubleshooting complex host and network
issues. Depending on the situation, you might use the bottom-up, top-down, or
divide-and-conquer approach when you use the OSI model to guide your
troubleshooting efforts.

In other situations, you might make an educated guess about the source of the issue
and begin investigating at the corresponding layer of the OSI model. You could also
use the substitution method (replacing a bad component with a known good component)
to quickly identify and isolate the cause of the issue.
OSI Model
Click the arrows to see what kind of troubleshooting techniques you use at each
layer.

Physical Layer

When you use a bottom-up approach to diagnose connectivity problems, you begin at
the Physical layer of the OSI model by verifying network connections and device
availability.

For example, a wireless device may have power to the antenna or transceiver
temporarily turned off. A wireless access point may have lost power because a
circuit breaker was tripped offline or a fuse was blown. Similarly, a network cable
connection may be loose, or the cable may be damaged.

Thus, before you begin inspecting service architectures, you should start with the
basics: Confirm physical connectivity.

Data Link Layer

Moving up to the Data Link layer, you verify data link architectures, such as
compatibility with a particular standard or frame type.

Although Ethernet is a predominant LAN network standard, devices that roam (such as
wireless devices) sometimes automatically switch between Wi-Fi, Bluetooth, and
Ethernet networks. Wireless networks usually have specified encryption standards
and keys. Connectivity may be lost because a network device or service has been
restored to a previous setting and the device is not responding to endpoint
requests that are using different settings.

Firewalls and other security policies may also be interfering with connection
requests. You should never disable firewalls, but in a controlled network
environment with proper procedures established, you may find that temporarily
disabling or bypassing a security appliance resolves a connectivity issue. The
remedy then is to properly configure security services to allow the required
connections.

Network Layer

Various connectivity problems may also occur at the Network layer.

Important troubleshooting steps include confirming proper network names and
addresses. Devices may have improperly assigned IP addresses that are causing
routing issues or IP address conflicts on the network. A device may have an
improperly configured IP address because it cannot communicate with a DHCP server
on the network.

Similarly, networks have different identities, such as wireless SSIDs, domain
names, and workgroup names. Another common problem exists when a particular network
has conflicting names or addresses. Issues with DNS name resolvers may be caused by
DNS caching services or connection to the wrong DNS servers. Internet Control
Message Protocol (ICMP) is used for network control and diagnostics at the Network
layer of the OSI model. Commonly used ICMP commands include ping and traceroute.

These two simple but powerful commands (and other ICMP commands and options) are
some of the most commonly used tools for troubleshooting network connectivity
issues. You can run ICMP commands in the command line interface on computers,
servers, routers, switches, and many other networked devices.
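For example, a quick reachability check can be scripted around the ping command, as in the hedged Python sketch below. The -c flag is the packet-count option on Linux and macOS (Windows uses -n), and the target address is just an example.

# Run a basic ICMP reachability test by invoking the system ping command.
import platform
import subprocess

def is_reachable(host: str, count: int = 4) -> bool:
    flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", flag, str(count), host],
                            capture_output=True, text=True)
    return result.returncode == 0        # 0 means replies were received

print(is_reachable("8.8.8.8"))           # example target; substitute your gateway or server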

Transport Layer

At the Transport layer, communications are more complex. Latency and network
congestion can interfere with communications that depend on timely acknowledgments
and handshakes. Time-to-live (TTL) values sometimes have to be extended in the
network service architecture to allow for slower response times during peak network
traffic hours. Similar congestion problems can occur when new services are added to
an existing network or when a local device triggers a prioritized service, such as
a backup or an antivirus scan.

Session Layer

Session layer settings can also be responsible for dropped network connections. For
example, devices that automatically go into a power standby mode (“sleep”) may have
expired session tokens that fail when the device attempts to resume connectivity.
At the server, failover communications or handshake negotiations with one server
may not translate to other clustered servers. Sessions may have to be restarted.

Presentation Layer

Presentation layer conflicts are often related to changes in encryption keys or
updates to service architectures that are not supported by various client devices.
For example, an older browser may not interoperate with a script or a new encoding
standard.

Application Layer

Application layer network connectivity problems are extremely common. Many
applications may conflict with other apps. Apps also may have caching or corrupted
files that can be remedied only by uninstalling and reinstalling or by updating to
a newer version. Some apps also require persistent connections to update services
or third parties, and network security settings may prevent those connections from
being made.
Common Troubleshooting Problems
Troubleshooting host and network connectivity problems typically starts with
analyzing the scope of the problem and identifying the devices and services that
are affected. To enhance the security of your network, always follow proper
troubleshooting steps, keep accurate records of any changes that you attempt,
document your changes, and publish any remedies so that others can learn from your
troubleshooting activities.

Click the tabs to see common troubleshooting steps for issues that may arise.

Local Hosts
Individual Devices
Shared Services
Anomalies
Problems with local hosts are typically much easier to assess and remedy than
problems that affect a network segment or service.


Effective troubleshooting methodologies include the steps necessary to diagnose and
fix a problem. Some of these common steps are listed below.

1. Discover the problem.

2. Evaluate the system configuration against the baseline.

3. Track the possible solutions.

4. Execute a plan.

5. Check the results.

6. Verify the solution (if unsuccessful, return to step 2; if successful, proceed
to step 7).

7. Deploy the positive solution.

Course Summary
Now that you've completed this course, you should be able to:
Explain how to explore endpoint and mobile device security using technology such as
personal firewalls, host-based IPS, and management features


Network Security Fundamentals

Legacy Firewalls

In this lesson, we will discuss the basics of legacy firewalls and the functions
they perform.

Legacy Firewalls
Firewalls have been central to network security since the early days of the
internet. A firewall is a hardware platform or software platform or both that
controls the flow of traffic between a trusted network (such as a corporate LAN)
and an untrusted network (such as the internet).

Packet Filtering Firewalls


First-generation packet filtering (also known as port-based) firewalls have the
following characteristics:

Operation
Match
Inspection

Stateful Packet Inspection Firewalls


Second-generation stateful packet inspection (also known as dynamic packet
filtering) firewalls are fast. However, they are port-based and highly dependent on
the trustworthiness of the two hosts, because individual packets aren’t inspected
after the connection is established.
Stateful packet inspection firewalls operate up to Layer 4 (Transport layer) of the
OSI model and maintain state information about the communication sessions that have
been established between hosts on two different networks. These firewalls inspect
individual packet headers to determine source and destination IP address, protocol
(TCP, UDP, and ICMP), and port number (during session establishment only). The
firewalls compare header information to firewall rules to determine if each session
should be allowed, blocked, or dropped. After a permitted connection is established
between two hosts, the firewall allows traffic to flow between the two hosts
without further inspection of individual packets during the session.
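The session-tracking behavior described above can be sketched in a few lines of Python: the firewall evaluates its rules only when a new session is established, records the session's 5-tuple, and then lets subsequent packets of that session through without re-checking the rules. The rule, tuple fields, and addresses are illustrative simplifications.

# Simplified stateful inspection: evaluate rules once per new session,
# then allow established sessions without further rule checks.
established = set()      # known sessions, keyed by a 5-tuple

def allowed_by_rules(pkt: dict) -> bool:
    # Illustrative rule: permit outbound TCP to port 443 only.
    return pkt["proto"] == "tcp" and pkt["dport"] == 443

def inspect(pkt: dict) -> bool:
    key = (pkt["proto"], pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    if key in established:
        return True                       # packet belongs to an established session
    if allowed_by_rules(pkt):
        established.add(key)              # record the new session's state
        return True
    return False

syn = {"proto": "tcp", "src": "10.1.1.5", "sport": 51000,
       "dst": "203.0.113.10", "dport": 443}
print(inspect(syn))    # True -- new session permitted by the rules
print(inspect(syn))    # True -- subsequent packets matched against session state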

Application Firewalls
Third-generation application firewalls are also known as application-layer
gateways, proxy-based firewalls, and reverse-proxy firewalls. Application firewalls
operate up to Layer 7 (the application layer) of the OSI model and control access
to specific applications and services on the network. These firewalls proxy network
traffic rather than permit direct communication between hosts. Requests are sent
from the originating host to a proxy server, which analyzes the contents of the
data packets and, if the request is permitted, sends a copy of the original data
packets to the destination host.

Application firewalls inspect application-layer traffic, so they can identify and
block specified content, malware, exploits, websites, and applications or services
that use hiding techniques such as encryption and non-standard ports. Proxy servers
can also be used to implement strong user authentication and web application
filtering and to mask the internal network from untrusted networks. However, proxy
servers have a significant negative impact on the overall performance of the
network.


Network Security Fundamentals

Web Content Filters

In this lesson, we will discuss the basics of web content filters, which allow or
block user access to websites.

Web Content Filter Functionality


Web content filters restrict the internet activity of users on a network. Web
content filters match a web address (URL) against a database of websites, which is
typically maintained by the individual security vendor that sells the web content
filters and is provided as a subscription-based service.
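
A minimal sketch of that lookup follows, assuming a made-up category database and
policy table; real vendor databases contain millions of entries and use different
category names.

```python
from urllib.parse import urlparse

# Made-up category database; real services maintain millions of entries
URL_CATEGORIES = {
    "news.example.com": "news",
    "mail.example.com": "web-based-email",
    "bets.example.net": "gambling",
}

# Per-category policy maintained by the administrator
CATEGORY_POLICY = {"news": "allow", "web-based-email": "allow", "gambling": "block"}

def filter_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    category = URL_CATEGORIES.get(host, "unknown")
    # Unknown sites are often blocked or sent for further analysis
    return CATEGORY_POLICY.get(category, "block")

print(filter_url("https://news.example.com/today"))   # allow
print(filter_url("https://bets.example.net/poker"))   # block
```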

Video: Web Content Filters


Web content filters classify websites into broad categories. These categories are
then used to control user access to websites. Watch the video to see instances in
which user activity is allowed and restricted on a corporate network.



Virtual Private
Networks

In this lesson, we will discuss the basics of virtual private networks (VPNs).

VPNs
A VPN creates a secure, encrypted connection (or tunnel) across the internet
between two endpoints. A client VPN establishes a secure connection between a user
and an organization's network. A site-to-site VPN establishes a secure connection
between two organizations' networks, usually geographically separated.

VPN client software is typically installed on mobile endpoints, such as laptop
computers and smartphones, to extend a network beyond the physical boundaries of
the organization.

The VPN client connects to a VPN server, such as a firewall, router, or VPN
appliance (or concentrator). After a VPN tunnel is established, a remote user can
access network resources, such as file servers, printers, and Voice over IP (VoIP)
phones, as if they were physically in the office.

Composition
VPNs are commonly built with the following protocols:

Point-to-Point Tunneling Protocol (PPTP)


PPTP is a basic VPN protocol that uses TCP port 1723 to establish communication
with the VPN peer. PPTP then creates a Generic Routing Encapsulation (GRE) tunnel
that transports encapsulated Point-to-Point Protocol (PPP) packets between the VPN
peers.

Easy Setup
PPTP is easy to set up and fast. However, PPTP is perhaps the least secure VPN
protocol, so it is now seldom used.

Use Cases
PPTP is commonly used with Password Authentication Protocol (PAP), Challenge-
Handshake Authentication Protocol (CHAP), or Microsoft CHAP versions 1 and 2 (MS-
CHAP v1/v2), all of which have well-known security vulnerabilities, to authenticate
tunneled PPP traffic.

Secure
Extensible Authentication Protocol Transport Layer Security (EAP-TLS) is a more
secure authentication protocol for PPTP. However, EAP-TLS requires a public key
infrastructure (PKI) and is therefore more difficult to set up.

Internet Protocol Security (IPsec)


IPsec is a secure communications protocol that authenticates and encrypts IP
packets in a communication session. An IPsec VPN requires compatible VPN client
software to be installed on the endpoint device. A group password or key is
required for configuration. Client-server IPsec VPNs typically require user action
to initiate the connection, such as launching the client software and logging in
with a username and password.

Click the tabs for more information about configuring and establishing secure
communication with IPsec VPNs.

Security Association (SA)

Internet Traffic
Split Tunneling

Secure Sockets Layer (SSL)


SSL is an encryption protocol that uses both asymmetric and symmetric cryptography
to secure communication sessions. SSL has been superseded by TLS, although SSL is
still the more commonly used term.

Deployment
An SSL VPN can be deployed as an agent-based or agentless browser-based connection.

An agentless SSL VPN requires only that users launch a web browser, use HTTPS to
open a VPN portal or webpage, and log in to the network with their user
credentials.

An agent-based SSL VPN connection creates a secure tunnel between an SSL VPN client
installed on a host computer or laptop and a VPN concentrator device in an
organization's network. Agent-based SSL VPNs are often used to securely connect
remote users to an organization's network.

Use Case
SSL VPN technology is the standard method of connecting remote endpoint devices
back to the enterprise network. IPsec is most commonly used in site-to-site or
device-to-device VPN connections, such as connecting a branch office network to a
headquarters network or data center.


Data Loss
Prevention

In this lesson, we will discuss the basics of Data Loss Prevention (DLP).

DLP
DLP solutions inspect data that is leaving, or egressing, a network, such as data
that is sent via email and/or file transfer. DLP prevents sensitive data (based on
defined policies) from leaving the network.

Purpose and Functionality


Click the tabs for more information about the purpose and functionality of DLP.

Sensitive Data
Data Patterns
Vulnerabilities
A DLP security solution prevents sensitive data from being transmitted outside the
network by a user, either inadvertently or maliciously.

Sensitive data includes the following:

Personally identifiable information (PII) such as names, addresses, birthdates,
Social Security numbers, health records (including electronic medical records, or
EMRs, and electronic health records, or EHRs), and financial data (such as bank
account numbers and credit card numbers)
Classified materials (such as military or national security information)
Intellectual property, trade secrets, and other confidential or proprietary company
information
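
A highly simplified sketch of pattern-based egress inspection follows. The two
regular expressions are illustrative approximations of U.S. Social Security and
16-digit card number formats; a production DLP engine uses far more rigorous data
patterns, validation, and machine learning-based classification.

```python
import re

# Illustrative data patterns for sensitive content (not production-grade)
DATA_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def inspect_egress(message: str) -> list[str]:
    """Return the list of sensitive data patterns found in outbound content."""
    return [name for name, pattern in DATA_PATTERNS.items() if pattern.search(message)]

outbound = "Customer SSN 123-45-6789 and card 4111 1111 1111 1111 attached."
matches = inspect_egress(outbound)
if matches:
    print(f"Blocked by DLP policy: matched {matches}")
else:
    print("Allowed")
```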

Unified Threat
Management

In this lesson, we will discuss the basics of unified threat management (UTM).

The UTM Appliance


UTM combines multiple cybersecurity functions into one appliance. The UTM appliance
executes these cybersecurity functions sequentially to examine traffic, which adds
latency to network traffic.

Security Functions
Many organizations have replaced UTM appliances with next-generation firewalls
(NGFWs) to reduce traffic inspection latency. The Palo Alto Networks next-
generation firewall uses a single pass parallel processing architecture to quickly
inspect all traffic crossing the firewall's dataplane.

Click the arrow for more information about combined security functions and some
typical disadvantages of UTM.

Combined Security Functions

UTM devices combine numerous security functions into a single appliance, including
anti-malware, anti-spam, content filtering, DLP, firewall (stateful inspection),
IDS/IPS, and VPN.

Course Summary
Now that you've completed this course, you should be able to:

Describe network security technologies such as packet filtering, stateful
inspection, application firewalls, IDS and IPS, web content filters, and VPN
tunnels


Prevention-First
Architecture
The networking infrastructure of an enterprise can be extraordinarily complex. The
Palo Alto Networks prevention-first security architecture secures enterprises'
perimeter networks, data centers, cloud-native applications, software as a service
(SaaS) applications, branch offices, and remote users with a fully integrated and
automated platform that simplifies security.

Simplified Security Posture


Simplifying your security posture allows you to reduce operational costs and
infrastructure while increasing your ability to prevent threats to your
organization.

Next-Generation Firewall
The Palo Alto Networks Next-Generation Firewall is the foundation of our product
portfolio. The firewall is available in physical, virtual, and cloud-delivered
deployment options, and it provides consistent protection wherever your data and
apps reside.

Subscription Services
Subscription services add enhanced threat services and next-generation firewall
capabilities, including DNS Security, URL Filtering, Threat Prevention, and
WildFire malware prevention.

Panorama
Panorama provides centralized network security management. It simplifies
administration while delivering comprehensive controls and deep visibility into
network-wide traffic and security threats.



Next-Generation
Firewalls

The current threat landscape exposes weaknesses in traditional port-based network
firewalls. End users want access to applications operating across a wide range of
device types, often with little regard for security risks. Meanwhile, data center
expansion, network segmentation, virtualization, and mobility initiatives are
forcing organizations to rethink how to enable access to applications and data
while still protecting their networks from advanced threats.

The Architecture
The Palo Alto Networks Next-Generation Firewall is the core of our product
portfolio. The firewall inspects all traffic, including applications, threats, and
content, and associates it with the user, regardless of location or device type.
The application, content, and user become integral components of the enterprise
security policy.

Deployment

Organizations deploy next-generation firewalls at the network perimeter and inside
the network at logical trust boundaries. All traffic crossing the firewall
undergoes a full-stack, single-pass inspection, which provides the complete context
of the application, associated content, and user identity. With this level of
context, you can align security with your key business initiatives.

Zero Trust Architecture

The next-generation firewall functions as a segmentation gateway in a Zero Trust
architecture. By creating a micro-perimeter, the firewall ensures that only known,
allowed traffic and legitimate applications have access to the protected surface
area.

Next-generation firewalls include many capabilities that enable complete visibility
of application traffic flows, including user identity and content. The firewall
protects against known attacks, unknown attacks, and advanced persistent threats.

Click the tabs for more information about different elements of the next-generation
firewall architecture.

Single-Pass Architecture

Single Stream-Based Engine


Identification

The foundational element of our enterprise security platform is Identification. We
use multiple identification techniques to determine the exact identity of
applications, users, and content traversing your network. These techniques expose
who is using what on your network.

What Is Identity and Access Management (IAM)?


Identity and access management (IAM) is a framework that helps organizations
manage access to their resources. IAM can be used to control who has access to
what, when they have access, and how they can access it.

IAM Terms and Concepts


Identity: An identity is a unique identifier that represents a user, group, or
application.
Access control: Access control is the process of granting or denying access to
resources.
Authentication: Authentication is the process of verifying the identity of a user.
Authorization: Authorization is the process of granting access to resources based
on the user's identity and permissions.
Reporting: Reporting is the process of generating reports on user activity.
Compliance: Compliance is the process of ensuring that an organization meets all
applicable regulations.
Risk management: Risk management is the process of identifying, assessing, and
mitigating risks.

IAM Goals
Click the icons for more information about the goals of IAM.

Compliance

Principle of Least Privilege

Protect Data and Systems

Authentication
How do you prevent an attack if an attacker has stolen user credentials? A common
prevention method is to configure multi-factor authentication. Multi-factor
authentication relies on two concepts: something the user knows (such as their
username and password) and something the user has (such as a security key,
smartphone, or multi-factor authentication application running on their laptop).

Single Factor Authentication


The firewall and Panorama can use external servers to control administrative access
to the web interface and end user access to services or applications through
Captive Portal and GlobalProtect. In this context, any authentication service that
is not local to the firewall or Panorama is considered external, regardless of
whether the service is internal (such as Kerberos) or external (such as a SAML
identity provider) relative to your network.

Multi-Factor Authentication
When multi-factor authentication is enabled, a user or an attacker must present two
or more forms of user credentials, called factors, to gain access to a network
resource. The first factor commonly is a username and password. The additional
factors often are some type of numerical code that is generated on a mobile phone
app or on a dedicated security key fob, or by software installed on the user’s
laptop or desktop system.
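
The numerical code is commonly a time-based one-time password (TOTP). The sketch
below, using only the Python standard library, shows the general RFC 6238-style idea
of deriving the code from a shared secret and the current time. The secret, digit
count, and time step are illustrative and this is not the configuration interface of
any particular product.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // time_step          # number of elapsed time steps
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# First factor: something the user knows; second factor: something the user has
shared_secret = "JBSWY3DPEHPK3PXP"        # illustrative secret provisioned to the authenticator app
user_supplied_code = totp(shared_secret)  # normally typed in by the user from their device

def authenticate(password_ok: bool, otp: str) -> bool:
    return password_ok and hmac.compare_digest(otp, totp(shared_secret))

print(authenticate(True, user_supplied_code))   # True only when both factors check out
```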

Authentication Policy
An Authentication policy enables an administrator to selectively issue multi-factor
authentication challenges based on the sensitivity of the information stored on the
network resource. A firewall administrator also can configure the number and
strength of the factors of authentication based on the sensitivity of the
information on each network resource. For example, you could require all corporate
users to authenticate using multi-factor authentication once a day but require IT
administrators to use multi-factor authentication each time they use Remote Desktop
Protocol (RDP) to access an Active Directory server.

Role-Based Access Control


Role-based access control (RBAC) is an access control model that restricts access
to resources based on the roles that users hold within an organization. In RBAC,
users are assigned to roles, and roles are granted access to resources. This allows
administrators to manage access to resources by simply managing the roles that
users belong to.

When you create a custom role, use the “least privilege” approach to grant user
access. Restrict the interfaces available to each user and the capabilities
available within each area of the web interface.
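
A minimal sketch of the RBAC idea, using hypothetical role and permission names,
might look like this:

```python
# Hypothetical roles mapped to the permissions they grant (least privilege)
ROLE_PERMISSIONS = {
    "auditor":      {"view-logs", "view-reports"},
    "policy-admin": {"view-logs", "edit-security-policy"},
    "superuser":    {"view-logs", "view-reports", "edit-security-policy", "manage-admins"},
}

# Users are assigned roles, never raw permissions
USER_ROLES = {"alice": {"auditor"}, "bob": {"policy-admin"}}

def has_permission(user: str, permission: str) -> bool:
    """Access is granted only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "view-logs"))             # True
print(has_permission("alice", "edit-security-policy"))  # False: not part of the auditor role
```

Managing access by changing role assignments, rather than per-user permissions, is
what keeps administration simple as the number of users grows.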

Other Types of Access Control


Attribute-based access control (ABAC) is a way to provide and manage user access to
IT services in areas that require more contextual awareness than a simple
user-focused parameter such as an assigned role.

Discretionary Access Control (DAC): The application owner has complete control over
who can access a particular application or service. An application can be a file,
directory, or any other object that can be accessed via the network, and the owner
can grant other users permission to access it.

Mandatory Access Control (MAC): MAC is a restrictive type of access control in which
access to resources is controlled by a security policy that is enforced by the
operating system. MAC is more secure than DAC, but it is also more difficult to
implement and manage.

User Profile
User and group information must be directly integrated into the technology
platforms that secure modern organizations. Knowing who is using the applications
on your network, and who may have transmitted a threat or is transferring files,
strengthens security policies and reduces incident response times. User-ID, a
standard feature on Palo Alto Networks next-generation firewalls, enables you to
leverage user information stored in a wide range of repositories.

Let's look at some of the benefits of user profiles:


Visibility into a User’s Application Activity
Visibility into the application
activity at a user level, not just an IP address level, allows you to more
effectively monitor and control the applications traversing the network. You can
align application usage with business requirements and, if appropriate, inform
users that they are in violation of policy, or even block their application usage
outright.

User-Based Policy Control
Policies can be defined to safely enable applications
based on users or groups of users in either outbound or inbound directions. For
example, user-based policy control can allow only the IT department to use tools
such as SSH, telnet, and FTP on standard ports. With User-ID, policy follows the
users no matter where they go – headquarters, branch office, or at home – and
whatever device they may use.

User-Based Analysis, Reporting, and Forensics
Informative reports on user
activities can be generated using any one of the pre-defined reports or by creating
a custom report.

Neutralizing Credential Theft
User-ID integrates with identity and authentication
frameworks, which enables precise access control through policy-based multi-factor
authentication. These controls disrupt the use of stolen credentials.
App-ID
App-ID, or application identification, accurately identifies applications
regardless of port, protocol, evasive techniques, or encryption. It provides
application visibility and granular, policy-based control.

Port-based stateful packet inspection technology was created more than 25 years ago
to control applications using ports and IP addresses. Using port-based stateful
inspection to identify applications depends on an application strictly adhering to
its assigned port(s). This presents a problem because applications can easily be
configured to use any port. As a result, many of today’s applications cannot be
identified, much less controlled, by the port-based firewall, and no amount of
“after the fact” traffic classification by firewall “helpers” can solve the
problems associated with port-based application identification.
App-ID Architecture
Palo Alto Networks App-ID technology does not rely on a single element, such as a
port or protocol. Instead, App-ID uses multiple mechanisms to determine what the
application is. The application identity then becomes the basis for the firewall
policy that is applied to the session. App-ID is highly extensible, and application
detection mechanisms can be added or updated to keep pace with the ever-changing
application landscape.
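
Purely to illustrate the general idea of weighing multiple signals instead of
trusting the port alone, here is a toy classifier. The signatures and fallback logic
are invented for this example and do not represent the actual App-ID mechanisms.

```python
# Toy signature set: byte patterns that identify an application in its first payload
APP_SIGNATURES = {
    b"SSH-2.0":  "ssh",
    b"GET ":     "web-browsing",
    b"\x16\x03": "ssl",          # TLS handshake record header
}

def identify_app(dst_port: int, first_payload: bytes) -> str:
    """Prefer payload evidence over the destination port."""
    for signature, app in APP_SIGNATURES.items():
        if first_payload.startswith(signature):
            return app
    # Fall back to the port only when no payload evidence is available
    return {22: "ssh", 80: "web-browsing", 443: "ssl"}.get(dst_port, "unknown")

# SSH deliberately run over port 80 is still identified as SSH, not web-browsing
print(identify_app(80, b"SSH-2.0-OpenSSH_9.6"))         # ssh
print(identify_app(80, b"GET /index.html HTTP/1.1"))    # web-browsing
```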

App-ID Advantages
Click the tabs to see the advantages of using App-ID.

Granular Control

Visibility

Positive Enforcement

User-ID
The next-generation firewall accurately identifies users for policy control.

A key component of security policies based on application use is identifying the
users who should be able to use those applications. IP addresses are ineffective
identifiers of users or server roles within the network. With the User-ID and
Dynamic Address Group (DAG) features, you can dynamically associate an IP address
with a user or server role in the data center. You can then define user- and role-
based security policies that adapt dynamically to changing environments.

User-ID Architecture
In environments that support multiple types of end users across a variety of
locations and access technologies, it is unrealistic to guarantee physical
segmentation of each type of user. Visibility into the application activity at a
user level, not just at an IP address level, allows you to more effectively enable
the applications traversing the network. You can define both inbound and outbound
policies to safely enable applications based on users or groups of users.

Click the card to see examples of user-based policies.

User-ID Advantages
Creating and managing security policies based on the application and user identity
protects the network more effectively than relying solely on port and IP address
information. User-ID enables organizations to leverage user information stored in a
wide range of repositories.
Click the tabs for more information about the advantages of using User-ID.

Visibility
Policy Control
Logging and Reporting
Improved visibility into application usage based on user and group information can
help organizations maintain a more accurate picture of network activity.

Content-ID
Content identification controls traffic based on complete analysis of all allowed
traffic. It uses multiple threat prevention and data loss prevention techniques in
a single-pass architecture that fully integrates all security functions.

Enterprise networks are facing a rapidly evolving threat landscape full of modern
applications, exploits, malware, and attack strategies that can evade traditional
detection methods. To avoid detection, attackers use applications that dynamically
hop ports, use non-standard ports, tunnel within other applications, or hide within
proxies, SSL encryption, or other types of encryption.

These evasive techniques prevent traffic inspection by traditional security
solutions using IPS and port-based firewalls, enabling threats to easily and
repeatedly flow across the network. Additionally, attackers can use customized
malware to avoid detection and mitigation by traditional signature-based anti-
malware solutions.

Content-ID Techniques
Content-ID infuses next-generation firewalls with capabilities not possible in
legacy, port-based firewalls. App-ID eliminates threat vectors through the tight
control of all types of applications. This capability immediately reduces the
attack surface of the network, after which all allowed traffic is analyzed for
exploits, malware, dangerous URLs, and dangerous or restricted files or content.
Content-ID then goes beyond stopping known threats to proactively identify and
control unknown malware, which is often used as the leading edge of sophisticated
network attacks.

Click the arrows for more information about the different techniques Content-ID
uses.

Application Decoders

Content-ID leverages the next-generation firewall's existing App-ID application and
protocol decoders to look for threats hidden within application data streams. This
ability enables the firewall to detect and prevent threats tunneled within approved
applications that would bypass traditional IPS or proxy solutions.

Uniform Threat Signature Format

Rather than use a separate set of scanning engines and signatures for each type of
threat, Content-ID leverages a uniform threat engine and signature format to detect
and block a wide range of malware C2 activity and vulnerability exploits in a
single pass.
Vulnerability Attack Protection (IPS)

Robust routines for traffic normalization and defragmentation, boosted by protocol-
anomaly, behavior-anomaly, and heuristic detection mechanisms, provide protection
from the widest range of both known and unknown threats.

Cloud-Based Intelligence

For unknown content, WildFire provides rapid analysis and a verdict that the
firewall can leverage.

SSL Decryption

More and more web traffic connections are encrypted with SSL by default, which can
provide some protection to end users—but SSL also can provide attackers with an
encrypted channel to deliver exploits and malware. Palo Alto Networks ensures
visibility by giving security organizations the flexibility to, by policy,
granularly look inside SSL traffic based on application or URL category.

Control of Circumventing Technologies

Attackers and malware have increasingly turned to proxies, anonymizers, and a
variety of encrypted proxies to hide from traditional network security products.
Palo Alto Networks provides the ability to tightly control these technologies and
limit them to approved users, while blocking unapproved communications that could
be used by attackers.


Next-Generation
Firewall Deployment

The full range of Palo Alto Networks physical Next-Generation Firewalls is easy to
deploy into your organization’s network.

NGFW Deployment
Physical Appliances Firewalls (PA-Series)
The Palo Alto Networks family of next-generation firewalls includes physical
appliances, virtualized firewalls, and 5G-ready firewalls. The firewalls are
purposefully designed for simplicity, automation, and integration. PA-Series
firewalls support a variety of data center and remote branch deployment use cases.

Click the tabs for more information about each PA-Series firewall.

PA-7000 Series
The PA-7000 Series ML-Powered NGFWs ensure top-notch security for high-speed data
centers and service providers. These ML-powered systems deliver dependable
performance, robust threat prevention, and high-throughput decryption capabilities.

PA-5450 Series
The cutting-edge PA-5450 Series next-generation firewall is crafted to fulfill the
demanding necessities of hyperscale data centers, internet edges, and campus
segmentation implementations. The PA-5450 boasts remarkable performance, providing
150Gbps of threat protection with security services activated.

PA-5400 Series
The advanced PA-5400 Series effectively halts both known and zero-day attacks
across all network traffic, including encrypted data. These potent ML-Powered NGFWs
are ideally suited for securing high-speed internet edge, data center, and
extensive campus segmentation scenarios.

PA-3400 Series
The state-of-the-art PA-3400 Series boasts impressive performance in a compact 1RU
design. As an energy-efficient ML-powered NGFW, it serves as the preferred firewall
for internet edge and campus settings.

PA-1400 Series
The cutting-edge PA-1400 Series is perfect for safeguarding expansive branch
locations and smaller enterprise campuses. It supports Power over Ethernet (PoE),
virtual systems (VSYS), high-speed 5G copper ports (mGig ports), and fiber ports,
making it an ideal choice for comprehensive protection.

PA-400 Series
The advanced PA-400 Series offers inline, real-time threat protection for
enterprise branches. With its compact design, this fourth-generation series
delivers enterprise-level security that is easy to implement. These ML-powered
NGFWs effectively prevent both known and unknown threats in real time while swiftly
decrypting branch traffic.

PA-220R
The PA-220R is a durable ML-Powered NGFW designed to provide strong security in
challenging conditions. Common applications include utility substations, power
plants, manufacturing facilities, oil and gas installations, and building
management systems.

Virtualized Firewalls (VM-Series)


VM-Series virtual firewalls provide all the capabilities of Palo Alto Networks
next-generation physical hardware firewalls (PA-Series) in a virtual machine form
factor. VM-Series form factors support a variety of deployment use cases.

Micro-Segmentation
VM-Series virtual firewalls reduce your environment’s attack surface by enabling
granular segmentation and micro-segmentation. Threat prevention capabilities ensure
that when threats do enter the environment, they are quickly identified and stopped
before they can exfiltrate data, deliver malware or ransomware payloads, or cause
other damage.

Multicloud and Hybrid Cloud


VM-Series virtual firewalls eliminate the need for multiple security solutions by
providing comprehensive visibility and control across multicloud and hybrid cloud
environments, including Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Oracle Cloud. VM-Series firewalls can also be deployed in
software-defined networks and virtualized environments, all managed from a single
console.

DevOps and CI/CD Pipeline


VM-Series virtual firewalls provide on-demand, elastic scalability to ensure
security when and where you need it most. With automated network security, security
provisioning can now be integrated directly into DevOps workflows and CI/CD
pipelines without slowing the pace of business.

CN-Series Container Firewall


The CN-Series container firewall provides threat protection for inbound, outbound,
and east-west traffic between container trust zones and other workload types
without slowing the speed of development.

Click the graphics for more information about the CN-Series container firewall.

Standard next-generation firewalls play an indispensable role in securing on-
premises deployments — few data centers can do without them. However, cloud-native
environments pose unique challenges that next-generation firewalls were not
designed to handle, especially when it comes to looking inside a Kubernetes
environment.

Challenges with Kubernetes Environment


In Kubernetes, pods (collections of containers) run on nodes, either physical or
virtual machines. Developers rarely deal with nodes explicitly, but nodes impact
how firewalls operate. Next-generation firewalls cannot determine which pod is the
source of outbound traffic because all source IP addresses are translated to the
node IP address. To a traditional firewall, all outbound traffic from the node
looks the same.

Due to the use of network address translation (NAT) in Kubernetes, all outbound
traffic carries the node source IP address. While Kubernetes creates challenges for
traditional security tools, it also presents opportunities to enhance security by
taking advantage of native constructs—most notably, namespaces. Kubernetes
namespaces help to simplify cluster management by making it easier to apply certain
policies to some parts of the cluster without affecting others. However, they are
also a valuable security tool. Security teams use namespaces to isolate workloads,
which reduces the risk of attacks spreading within a cluster, and to establish
resource quotas that mitigate the damage caused by a successful cluster breach.
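
A small sketch of why the NAT behavior matters: after source NAT, distinct pods
become indistinguishable to a firewall that sees only the node IP. The pod names,
namespaces, and addresses below are invented for illustration.

```python
# Invented cluster state: pods (with namespaces) running on one node
PODS = {
    "10.244.1.10": ("frontend", "web"),       # pod IP -> (pod name, namespace)
    "10.244.1.11": ("payments", "finance"),
}
NODE_IP = "192.0.2.20"

def egress_as_seen_by_firewall(pod_ip: str) -> str:
    """Outbound traffic is source-NATed to the node IP, hiding the pod identity."""
    return NODE_IP

for pod_ip, (name, namespace) in PODS.items():
    print(f"{name} ({namespace}) -> firewall sees source {egress_as_seen_by_firewall(pod_ip)}")

# Both pods appear identical to a perimeter firewall, which is why visibility into
# namespaces and pods has to come from inside the cluster itself.
```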

CN-Series Container Firewall Solution


A secure cloud-native architecture requires the ability to secure traffic that
crosses namespace boundaries or travels outbound to legacy workloads such as bare
metal servers. However, doing so requires knowing the internal state of objects
such as namespaces, pods, and containers. Because that information is not available
outside the environment.

Palo Alto Networks CN-Series next-generation firewalls deploy as two sets of pods:
one for the management plane (CN-MGMT), and another for the firewall dataplane (CN-
NGFW). The management pod always runs as a Kubernetes service. The dataplane pods
can be deployed in two modes: distributed or clustered. Click the tabs for more
information about distributed and clustered mode.

Distributed Mode

Clustered Mode

K2-Series Firewalls
The K2-Series firewalls are 5G-ready next-generation firewalls designed to prevent
successful cyberattacks from targeting mobile network services. The K2-Series
firewalls are designed to handle growing throughput needs due to increased
application, user, and device-generated data.

K2-Series Advantages
To tap into 5G business opportunities with minimal risk of exploitation by bad
actors, you need complete visibility and automated security across all network
locations.

Click the tabs for more information about the advantages of K2-Series firewalls.

Scalable
Secure and Fast
You can deploy K2-Series firewalls on all 5G network interfaces to achieve
scalable, complete protection with consistent management and full application
visibility. The shift in 5G network architectures creates more intrusion points,
including attacks inside mobile tunnels and threats within apps traversing cellular
traffic. Mobile operators need consistent security enforcement across all network
locations and all signaling traffic. This larger attack surface increases the need
for application-aware Layer 7 security to detect known and unknown threats.


IronSkillet

IronSkillet is a set of day-one, next-generation firewall configuration templates
for PAN-OS that are based on security best practice recommendations.

IronSkillet
IronSkillet provides extensive how-to documentation and templates that provide an
easy-to-implement configuration model that is use-case agnostic. The templates
emphasize key security elements, such as dynamic updates, security profiles, rules,
and logging that should be consistent across deployments.

IronSkillet Benefits
Palo Alto Networks best practice documentation shares our expertise in security
prevention with customers and partners, helping them improve their security posture
across various scenarios. IronSkillet templates play a complementary role by
compiling best practice recommendations into prebuilt, day-one configurations that
can be readily loaded into Panorama or a next-generation firewall. Benefits of
using IronSkillet templates include:


Expedition
Migration Tool

Palo Alto Networks Expedition migration tool enables organizations to analyze their
existing environment, convert existing Security policies to Palo Alto Networks
Next-Generation Firewalls, and assist with the transition from proof of concept to
production.

Expedition Migration Tool Functionality


The migration to a Palo Alto Networks Next-Generation Firewall is a critical step
toward the prevention and detection of cyberattacks. Today’s advanced threats
require moving away from port-based firewall policies, which are no longer adequate
to protect against a modern threat landscape, into an architecture that reduces
your attack surface by safely enabling only those applications that are critical to
your organization and eliminating applications that introduce risk.

We use our tools, expertise, and best practices to help organizations analyze their
existing environment and migrate policies and firewall settings to the next-
generation firewall, and we assist in all phases of the transition.

Click the arrows to see the primary functions of Expedition.

Third-Party Migration
Third-party migration transfers the various firewall rules, addresses, and service
objects to a PAN-OS XML configuration file that can be imported into a Palo Alto
Networks next-generation firewall. Third-party migration from the following
firewall vendors is available:

Cisco ASA/PIX/FWSM
Check Point
Fortinet
McAfee Sidewinder
Juniper SRX/NetScreen

Best Practice
Assessment

The Palo Alto Networks Best Practice Assessment (BPA) is a free tool used to
quickly identify the most critical security controls for an organization to focus
on.

Parts of BPA
Most organizations don’t fully implement the capabilities of their next-generation
firewalls, leading to gaps in security.

The BPA consists of the following three parts:

Best Practice Assessment


The Best Practice Assessment is a focused evaluation of your adoption of security
configuration best practices for Next-Generation Firewalls or Panorama network
security management, grouped by policies, objects, networks, and devices.

Security Policy Capability Adoption Heatmap


The Security Policy Capability Adoption Heatmap shows gaps in your capability
adoption, displaying your current adoption percentage rating for each metric as
well as a comparison against industry averages. With deep insight into how you are
leveraging prevention capabilities, you can continuously improve your security.

BPA Executive Summary


The BPA Executive Summary is designed for management and executives to better
understand the current state of security capability adoption at a glance—including
information on progress from prior reports, if available—to help your organization
confidently progress toward best practice implementation.


Zero
Trust

The Zero Trust network is interconnected with your existing network to take
advantage of the technology you already have. Then, over time, you iteratively move
your additional datasets, applications, assets, or services from your legacy
network to your Zero Trust network.

Implementing Zero Trust


The phased approach helps make deploying Zero Trust networks manageable, cost-
effective, and nondisruptive.

Video: Implementing Zero Trust


Twentieth-century design paradigms can create problems when designing a twenty-
first-century Zero Trust network. However, building Zero Trust networks is actually
much simpler than building legacy twentieth-century hierarchical networks. Watch
the video to see the five-step methodology of a Zero Trust deployment.




Subscription
Services

Large organizations are saddled with too many point solutions and services, each
designed to secure against one specific threat vector. With our subscription
services, you can confidently secure all traffic that traverses any of your
networks or clouds and automatically share intelligence across the organization.

Types of Subscription Services


You need to activate the following licenses so your next-generation firewall can
gain complete visibility and apply full threat prevention on your network.

IoT Security Services


The IoT Security solution works with next-generation firewalls to dynamically
discover and maintain a real-time inventory of the IoT devices on your network.
Through AI and machine-learning algorithms, the IoT Security solution achieves a
high level of accuracy, even classifying IoT device types encountered for the first
time. And because it’s dynamic, your IoT device inventory is always up to date. IoT
Security also provides the automatic generation of policy recommendations to
control IoT device traffic, as well as the automatic creation of IoT device
attributes for use in firewall policies.

SD-WAN Service
SD-WAN provides intelligent and dynamic path selection on top of the industry-
leading security that PAN-OS software already delivers. Managed by Panorama, the
SD-WAN implementation includes:

DNS Security Service


The Palo Alto Networks DNS Security service applies predictive analytics to disrupt
attacks that use DNS for C2 or data theft. Tight integration with Palo Alto
Networks next-generation firewalls gives you automated protection and eliminates
the need for independent tools.

The following describes how threats hidden in DNS traffic are identified and the
importance of cloud-based protections.

Identifying DNS traffic

Threats hidden in DNS traffic are rapidly identified with shared threat
intelligence and machine learning.

Cloud-based protections

Cloud-based protections scale infinitely and are always up to date, giving your
organization a critical new control point to stop attacks that use DNS.
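
As a very rough illustration of one class of signal used to flag suspicious DNS
activity, the sketch below scores a domain label by its Shannon character entropy;
long, high-entropy labels are a common (though by no means sufficient) indicator of
algorithmically generated or tunneling domains. The thresholds are arbitrary and not
a production detection rule.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_suspicious(fqdn: str, min_len: int = 20, min_entropy: float = 3.5) -> bool:
    label = fqdn.split(".")[0]
    return len(label) >= min_len and label_entropy(label) >= min_entropy

print(looks_suspicious("mail.example.com"))                          # False
print(looks_suspicious("x9f3kq7zt2m8wv4hr6py1snd0.example.com"))     # True
```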

URL Filtering Service


To complement the next-generation firewall's threat prevention and application
control capabilities, a fully integrated, on-box URL Filtering database enables
security teams to control end-user web surfing activities and combine URL context
with application and user rules. The URL Filtering service complements App-ID by
enabling you to configure the next-generation firewall to identify and control
access to websites and to protect your organization from websites hosting malware
and phishing pages. You can use the URL category as a match criterion in policies,
which permits exception-based behavior and granular policy enforcement. For
example, you can deny access to malware and hacking sites for all users, but allow
access to users who belong to the IT Security group.
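
The exception-based behavior described above can be sketched as an ordered,
first-match policy lookup; the rule structure, group names, and categories here are
invented for illustration and do not reflect actual PAN-OS rule syntax.

```python
# Invented ordered rulebase: the first matching rule decides the action
RULES = [
    {"url_category": "hacking", "user_group": "it-security", "action": "allow"},
    {"url_category": "hacking", "user_group": "any",         "action": "block"},
    {"url_category": "malware", "user_group": "any",         "action": "block"},
    {"url_category": "any",     "user_group": "any",         "action": "allow"},
]

def evaluate(url_category: str, user_groups: set[str]) -> str:
    for rule in RULES:
        category_ok = rule["url_category"] in ("any", url_category)
        group_ok = rule["user_group"] == "any" or rule["user_group"] in user_groups
        if category_ok and group_ok:
            return rule["action"]
    return "block"

print(evaluate("hacking", {"it-security"}))   # allow (exception for the IT Security group)
print(evaluate("hacking", {"marketing"}))     # block
```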

Click the tabs for more information about PAN-DB and user-credential detection.

PAN-DB

User-Credential Detection

Advanced URL Filtering Service


Advanced URL Filtering uses a cloud-based, ML-powered web security engine to
perform inspection of web traffic in real time. This reduces reliance on URL
databases and out-of-band web crawling to detect and prevent advanced, file-less
web-based attacks including targeted phishing, web-delivered malware and exploits,
command-and-control, social engineering, and other types of web attacks.

Threat Prevention Service


Threat Prevention blocks known malware, exploits, and C2 activity on the network.
Adding the Threat Prevention subscription brings additional capabilities to your
next-generation firewall that identify and prevent known threats hidden within
allowed applications. The Threat Prevention subscription includes
malware/antivirus, C2, and vulnerability protection.

Advanced Threat Prevention Service


In addition to all of the features included with Threat Prevention, the Advanced
Threat Prevention subscription provides an inline cloud-based threat detection and
prevention engine, leveraging deep learning models trained on high fidelity threat
intelligence gathered by Palo Alto Networks, to defend your network from evasive
and unknown command-and-control (C2) threats by inspecting all network traffic.

WildFire Overview
The WildFire cloud-based malware analysis environment is a cyberthreat prevention
service that identifies unknown malware, zero-day exploits, and advanced persistent
threats (APTs) through static and dynamic analysis in a scalable, virtual
environment.

Updated Protections
WildFire automatically disseminates updated protections in near real time to
immediately prevent threats from spreading, without manual intervention.
Subscription Service
Basic WildFire support is included as part of the Threat Prevention license. The
WildFire subscription service provides enhanced services for organizations that
require immediate coverage for threats. The subscription service includes WildFire
analysis for advanced file types (APK, PDF, Microsoft Office, and Java Applet) and
provides the ability to upload these files for cloud sandbox analysis using the
WildFire API.

WildFire Verdicts
As part of the next-generation firewall’s inline threat prevention capability, the
firewall performs a hash calculation for each unknown file and then submits the
hash to WildFire.

If any WildFire subscriber has seen the file before, then the existing verdict for
that file is immediately returned. Links from inspected emails are also submitted
to WildFire for analysis.
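
A simplified sketch of the hash-lookup step: compute a SHA-256 digest of the unknown
file and consult a verdict cache before deciding whether to submit the file for
analysis. The verdict values and the cache are placeholders, not the WildFire API.

```python
import hashlib

# Placeholder verdict cache standing in for the cloud service's prior analyses
KNOWN_VERDICTS = {
    # sha256 digest -> verdict (this entry is the digest of the empty file)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "benign",
}

def file_sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_file(data: bytes) -> str:
    digest = file_sha256(data)
    verdict = KNOWN_VERDICTS.get(digest)
    if verdict is not None:
        return f"existing verdict: {verdict}"
    # Unknown file: the firewall would be instructed to submit it for analysis
    return "unknown: submit for sandbox analysis"

print(check_file(b""))             # existing verdict: benign
print(check_file(b"new sample"))   # unknown: submit for sandbox analysis
```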

WildFire Analysis
If WildFire has never seen the file, the next-generation firewall is instructed to
submit the file for analysis. If the file size is under the configured size limit,
the firewall securely transmits the file to WildFire. Firewalls with an active
WildFire license perform scheduled auto-updates to their WildFire signatures, with
update checks configured as often as every minute. Click the tabs for more
information about WildFire analyses.

Machine Learning-Based
Improved Security Posture and Protection
Cloud-Based
WildFire leverages inline machine learning malware and phishing prevention
techniques, such as real-time WildFire verdict and anti-malware dynamic
classification, to determine if the corresponding webpages for email links
submitted to the service contain any exploits, malware, or phishing capabilities.
WildFire considers the behaviors and properties of the website when making a
verdict on the link.

Cortex XSOAR TIM


Provides a graphical analysis of firewall traffic logs and identifies potential
risks to your network using threat intelligence from the Threat Intel Management
portal. With an active license, you can also open a threat intel search based on
logs recorded on the firewall.

Cortex Data Lake


Provides cloud-based, centralized log storage and aggregation. The Cortex Data Lake
is required or highly recommended to support several other cloud-delivered
services, including Cortex XDR, IoT Security, Prisma Access, and the Traps
management service.

GlobalProtect Gateway
Provides mobility solutions and/or large-scale VPN capabilities. By default, you
can deploy GlobalProtect portals and gateways (without HIP checks) without a
license. If you want to use advanced GlobalProtect features (HIP checks and related
content updates, the GlobalProtect Mobile App, IPv6 connections, or a GlobalProtect
Clientless VPN), you will need a GlobalProtect Gateway license for each gateway.

Virtual Systems
This is a perpetual license and is required to enable support for multiple virtual
systems on PA-3200 Series firewalls. In addition, you must purchase a Virtual
Systems license if you want to increase the number of virtual systems beyond the
base number provided by default on PA-5200 Series and PA-7000 Series firewalls
(the base number varies by platform). The PA-800 Series, PA-220, and VM-Series
firewalls do not support virtual systems.

Enterprise Data Loss Prevention (DLP)


Provides cloud-based protection against unauthorized access, misuse, extraction,
and sharing of sensitive information. Enterprise DLP provides a single engine for
accurate detection and consistent policy enforcement for sensitive data at rest and
in motion, using machine learning-based data classification and hundreds of data
patterns based on regular expressions.

SaaS Security Inline


The SaaS Security solution works with Cortex Data Lake to discover all of the SaaS
applications in use on your network. SaaS Security Inline can discover thousands of
Shadow IT applications and their users and usage details. SaaS Security Inline also
enforces SaaS policy rule recommendations seamlessly across your existing Palo Alto
Networks firewalls. App-ID Cloud Engine (ACE) also requires SaaS Security Inline.


Panorama

Panorama enables you to manage all key features of Palo Alto Networks Next-
Generation Firewalls by using a model that provides central oversight and local
control.

Panorama Management
Advantages of Panorama
The time it takes to deploy changes across firewalls can be costly, both in
employee time and possible project delays. In addition, errors can increase when
network and security engineers program changes firewall by firewall.

Click the tabs for more information about how Panorama reduces security management
complexity and simplifies network security management.

Reduces Security Management Complexity


Simplifies Network Security Management
Panorama reduces security management complexity with consolidated policy creation
and centralized management features. The Application Command Center (ACC) in
Panorama provides a customizable dashboard for setup and control of Palo Alto
Networks Next-Generation Firewalls, with an efficient rulebase and actionable
insight into network-wide traffic and threats.

Deployment Modes
Three deployment mode options are available for Panorama: Panorama mode, management
only mode, and log collector mode. Separating management and log collection modes
enables Panorama to scale to meet organizational and geographical requirements.
Being able to choose both form factor and deployment mode gives you maximum
flexibility for managing Palo Alto Networks Next-Generation Firewalls in a
distributed network.

Panorama Mode
Panorama mode controls both policy and log management functions for all managed
devices.

Management Only and Log Collector Mode


In management only mode, Panorama manages configurations for managed devices but
does not collect or manage logs.

In log collector mode, one or more log collectors collect and manage logs from
managed devices. This assumes that another deployment of Panorama is operating in
management only mode.

Templates and Template Stacks


Panorama provides tools for centralized administration that can reduce time and
errors in firewall management. These tools allow you to manage common device and
network configurations through templates, define common building blocks for device
and network configuration within templates, and use variables within templates to
manage specific devices.

Click the tabs for more information about the tools Panorama provides through
templates and template stacks.

Template Stacks

Individual Devices

Templates

Hierarchical Device Groups


Panorama manages common policies and objects through hierarchical device groups.
Panorama uses multilevel device groups to centrally manage policies across all
deployment locations with common requirements.

For example, device groups may be determined geographically, such as Europe and
North America. Also, each device group can have a functional subdevice group (for
example, perimeter or data center subdevice groups).


Click the tabs for more information about elements in a hierarchical device group.

Pre-Rules and Post-Rules

Local Rules (Individual Devices)

Role-Based Administration

Evaluation Order

Selective Log Forwarding
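
As a hedged illustration of the evaluation order referenced in the tabs above, the
sketch below evaluates Panorama pre-rules first, then device-local rules, then
post-rules, returning the first match. The rule format is invented, and the actual
PAN-OS evaluation includes additional levels (such as shared and default rules).

```python
# Invented rule format: (name, matches(traffic) -> bool, action)
def first_match(traffic, pre_rules, local_rules, post_rules):
    """Pre-rules are evaluated first, then device-local rules, then post-rules."""
    for name, matches, action in [*pre_rules, *local_rules, *post_rules]:
        if matches(traffic):
            return name, action
    return "default-rule", "deny"

pre_rules   = [("block-known-bad", lambda t: t["dst"] == "198.51.100.9", "deny")]
local_rules = [("branch-web",      lambda t: t["app"] == "web-browsing", "allow")]
post_rules  = [("catch-all-log",   lambda t: True,                        "deny")]

print(first_match({"dst": "203.0.113.5", "app": "web-browsing"},
                  pre_rules, local_rules, post_rules))   # ('branch-web', 'allow')
```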

Dynamic Logging and Reporting


Panorama uses powerful monitoring and reporting tools available at the local device
management level. As you perform log queries and generate reports, Panorama
dynamically pulls the most current data directly from next-generation firewalls
under management or from logs forwarded to Panorama.

Course Summary
Now that you've completed this course, you should be able to:

Describe how to properly secure enterprise networks through PAN-OS deployment
templates and migration options, and through DNS Security, URL Filtering, Threat
Prevention, and WildFire subscription services


Cybersecurity Fundamentals

Understanding the Modern Cybersecurity Landscape

The modern cybersecurity landscape is a rapidly evolving hostile environment with
advanced threats and increasingly sophisticated threat actors. This lesson
describes the current cybersecurity landscape, explains SaaS application
challenges, and describes various security and data protection regulations and
standards.

Modern Computing Trends


The nature of enterprise computing has changed dramatically over the past decade.

Introduction to Web 2.0 and Web 2.0 Applications


Core business applications are now commonly installed alongside Web 2.0 apps on a
variety of endpoints. Networks that were originally designed to share files and
printers are now used to collect massive volumes of data, exchange real-time
information, transact online business, and enable global collaboration. Many Web
2.0 apps are available as software-as-a-service (SaaS), web-based, or mobile apps
that can be easily installed by end users or that can be run without installing any
local programs or services on the endpoint. The use of Web 2.0 apps in the
enterprise is sometimes referred to as Enterprise 2.0. Many organizations are
recognizing significant benefits from the use of Enterprise 2.0 applications and
technologies, including better collaboration, increased knowledge sharing, and
reduced expenses.

Click the arrows for more information about common Web 2.0 apps
and services (many of which are also SaaS apps).

File Sync and Sharing Services


File sync and sharing services are used to manage, distribute, and provide access
to online content, such as documents, images, music, software, and video. Examples
include Apple iCloud, Box, Dropbox, Google Drive, Microsoft OneDrive, Spotify, and
YouTube.

Instant Messaging (IM)


IM is used to exchange short messages in real time. Examples include Facebook
Messenger, Skype, Snapchat, and WhatsApp.

Microblogging
Microblogging web services allow a subscriber to broadcast short messages to other
subscribers. Examples include Tumblr and Twitter.

Office Productivity Suites


Office productivity suites consist of cloud-based word processing, spreadsheet, and
presentation software. Examples include Google Apps and Microsoft Office 365.

Remote Access Software
Remote access software is used for remote sharing and control of an endpoint,
typically for collaboration or troubleshooting. Examples include LogMeIn and
TeamViewer.

Remote Team Meeting Software


Remote team meeting software is used for audio conferencing, video conferencing,
and screen sharing. Examples include Adobe Connect, Microsoft Teams, and Zoom.

Social Curation
Social curation shares collaborative content about particular topics. Social
bookmarking is a type of social curation. Examples include Cogenz, Instagram,
Pinterest, and Reddit.

Social Networks
Social networks are used to share content with business or personal contacts.
Examples include Facebook, Instagram, and LinkedIn.

Web-Based Email
Web-based email is an internet email service that is typically accessed via a web
browser. Examples include Gmail, Outlook.com, and Yahoo! Mail.

Wikis
Wikis enable users to contribute, collaborate, and edit site content. Examples
include Socialtext and Wikipedia.

Web 3.0
The vision of Web 3.0 is to return the power of the internet to individual users,
in much the same way that the original Web 1.0 was envisioned. To some extent, Web
2.0 has become shaped and characterized, if not controlled, by governments and
large corporations dictating the content that is made available to individuals and
raising many concerns about individual security, privacy, and liberty.

AI and Machine Learning
AI and machine learning are two related technologies that enable systems to
understand and act on information in much the same way that a human might use
information. AI acquires and applies knowledge to find the most optimal solution,
decision, or course of action. Machine learning is a subset of AI that applies
algorithms to large datasets to discover common patterns in the data that can then
be used to improve the performance of the system.

Blockchain
Blockchain is essentially a data structure containing transactional records (stored
as blocks) that ensures security and transparency through a vast, decentralized
peer-to-peer network with no single controlling authority. Cryptocurrency, such as
Bitcoin, is an example of a blockchain application.

Data Mining
Data mining enables patterns to be discovered in large datasets by using machine
learning, statistical analysis, and database technologies.

Mixed Reality
Mixed reality includes technologies, such as virtual reality (VR), augmented
reality (AR), and extended reality (XR), that deliver an immersive and interactive
physical and digital sensory experience in real time.
Natural Language Search
Natural language search is the ability to understand human spoken language and
context (rather than a Boolean search, for example) to find information.

Managed Security Services


The global shortage of cybersecurity professionals – estimated by the International
Information System Security Certification Consortium, or (ISC)², to be 3.4 million
in 2023 – is leading many organizations to partner with third-party security
services organizations. These managed security service providers (MSSPs) typically
operate fully staffed 24/7 security operations centers (SOCs) and offer
a variety of services such as log collection and aggregation in a security
information and event management (SIEM) platform, event detection and alerting,
vulnerability scanning and patch management, threat intelligence, and incident
response and forensic investigation, among others.

Work-from-Home (WFH) and Work-from-Anywhere (WFA)


In the wake of the global pandemic, many organizations have implemented remote
working models that include WFH and WFA. In many cases, these organizations have
realized additional benefits from these models, including increased operational
efficiencies, higher employee productivity and morale, and greater access to a
diverse talent pool that extends far beyond the immediate geographical region of
the organization. (Source: “Ericsson Mobility Report, November 2021.” Ericsson.
Accessed January 16, 2022.)

New Application Threat Vectors


Exploiting vulnerabilities in core business applications has long been a
predominant attack vector, but threat actors are constantly developing new tactics,
techniques, and procedures (TTPs).

Protect Networks and Cloud Environments


To effectively protect their networks and cloud environments, enterprise security
teams must manage the risks associated with a relatively limited, known set of core
applications, as well as the risks associated with an ever-increasing number of
known and unknown cloud-based applications. The cloud-based application consumption
model has revolutionized the way organizations do business, and applications such
as Microsoft Office 365 and Salesforce are being consumed and updated entirely in
the cloud.

Application Classification
Many applications are designed to circumvent traditional port-based firewalls, so
that they can be easily installed and accessed on any device, anywhere and anytime.
Click the arrow for more information about how applications are classified and how
difficult it has become to classify applications.

Allowing and Blocking Applications

Classifying applications as either “good” (allowed) or “bad” (blocked) in a clear
and consistent manner has also become increasingly difficult. Many applications are
clearly good (low risk, high reward) or clearly bad (high risk, low reward), but
most are somewhere in between depending on how the application is being used.

Tactics, Techniques, and Procedures (TTPs)


The following are the different types of TTPs:
Port Hopping
Port hopping allows adversaries to randomly change ports and protocols during a
session.

Using Non-Standard Ports


An example of using non-standard ports is running Yahoo! Messenger over TCP port 80
(HTTP) instead of the standard TCP port for Yahoo! Messenger (5050).

Tunneling
Another method is tunneling within commonly used services, such as running peer-to-
peer (P2P) file sharing or an IM client such as Meebo over HTTP.

Hiding Within SSL Encryption


Hiding in SSL encryption masks the application traffic, for example, over TCP port
443 (HTTPS). More than half of all web traffic is now encrypted.

Turbulence in the Cloud


Cloud computing technologies help organizations evolve their data centers from a
hardware-centric architecture to a dynamic and automated environment. Cloud
environments pool computing resources for on-demand support of application
workloads that can be accessed anywhere, anytime, and from any device.

Public and Private Cloud Environments


Many organizations have been forced into significant compromises regarding their
public and private cloud environments, trading function, visibility, and security
for simplicity, efficiency, and agility. If security controls make an application
hosted in the cloud less available or responsive, those controls are typically
“streamlined” out of the cloud design.

Cloud Security Trade-Offs

Service Models
There are three cloud computing service models: Software as a Service (SaaS),
Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).

Click the tabs for more information about the cloud computing models.

Software as a Service (SaaS)


Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
In a SaaS model, the capability provided to the consumer is to use the provider’s
applications running on a cloud infrastructure. The applications are accessible
from various client devices through either a thin-client interface, such as a web
browser (e.g., web-based email), or a program interface. The consumer does not
manage or control the underlying cloud infrastructure, including the network,
servers, operating systems (OSs), storage, or even individual application
capabilities, with the possible exception of limited user-specific application
configuration settings.

Examples of SaaS providers:

Google
FutureFuel
Microsoft
Squibler
Slack
Boast Capital
Zoom
Lumen5
Zendesk
Shopify
Dropbox
HubSpot
Mailchimp
Adobe

SaaS, PaaS, and IaaS Use Cases



SaaS
SaaS cloud service is hosted by the cloud service provider (CSP) and made available
to consumers on a pay-as-you-go model. SaaS is suitable for consumers across
different localities. The software is licensed on a monthly or yearly subscription
and can be accessed via a browser and an internet connection. The primary function
of SaaS product development is to provide cloud-based apps to consumers.

Dropbox, which lets users share and download files over the network, and Google
Docs, which lets users create and share documents over the web, are perfect
examples of SaaS cloud services.

Below are examples of SaaS use cases. Click the image to enlarge it.

Below are examples of business apps that use SaaS.

PaaS
PaaS is perfect for software developers.

The benefit of PaaS is that it is compatible with different programming languages
and gives developers full control to create custom software. However, PaaS is
not as flexible as IaaS.

The main function of PaaS is to give developers a useful framework to build, test,
and manage new product apps. Developers find PaaS easy to use because it provides
the database, application tools, and OS required for app development all at once,
saving time and resources.

Many developers love PaaS because it gives them a platform to build apps that can
be provided as a SaaS solution. The best example of PaaS is the Google App Engine,
which facilitates easy app creation and hosting.
Below are examples of PaaS use cases. Click the image to enlarge it.

IaaS
The primary function of IaaS is to provide virtual data centers to businesses.

This cloud service is suitable for IT administrators. IaaS offers a full
infrastructure, the server, and the storage space where new technologies and
experiments are conducted over the cloud. With IaaS, you can perform data mining
analysis, host a website and software solution, and create virtual data centers for
large-scale enterprises.

In IaaS, the vendor works on networking resources, storage space management, and
the dedicated data center, while the business works on specified tools for
development, hosted app management, and OS deployment.

Amazon Web Services is an excellent example of IaaS. Netflix and Salesforce are
moving toward Amazon Web Services to support their ever-growing customer bases.

Below are examples of IaaS use cases. Click the image to enlarge it.

Hypergrowth of SaaS Applications


Organizations have become increasingly dependent on a host of mission-critical
collaboration applications like Slack, Microsoft Teams, Zoom, Jira, and Confluence.
Today, these collaboration apps are driving business agility because they keep
employees connected anywhere they are—all day, every day. However, these
collaboration apps have created a fundamentally different way of conducting
business because messages are now shorter and more frequent.

Risk Associated with SaaS Applications


SaaS applications can be harmful and create new risks if they are not properly
secured.

Click the tabs for more information about each risk associated with SaaS
applications.

Exposure of Confidential Information

Cloud-Based Threats

Unsanctioned SaaS Applications

Lack of Visibility and Control

SaaS Application Risks


The average employee uses at least eight applications. As employees add and use
more SaaS apps that connect to the corporate network, the risk of sensitive data
being stolen, exposed, or compromised increases. It is important to consider the
security of the apps, what data they have access to, and how employees are using
them.

Introduction to SaaS
Data is located everywhere in today’s enterprise networks, including in many
locations that are not under the organization’s control. New data security
challenges emerge for organizations that permit SaaS use in their networks. With
SaaS applications, data is often stored where the application resides – in the
cloud. Thus, the data is no longer under the organization’s control, and visibility
is often lost. SaaS vendors do their best to protect the data in their
applications, but it is ultimately not their responsibility. Just as in any other
part of the network, the IT team is responsible for protecting and controlling the
data, regardless of its location.

SaaS Security Challenges


Because of the nature of SaaS applications, their use is very difficult to control
– or have visibility into – after the data leaves the network perimeter. This lack
of control presents a significant security challenge: End users are now acting as
their own “shadow” IT department, with control over the SaaS applications they use
and how they use them. Click the arrows for more information about the inherent
data exposure and threat insertion risks of SaaS.

Malicious Outsiders

The most common source of breaches for networks overall is also a critical concern
for SaaS security. The SaaS application becomes a new threat vector and
distribution point for malware used by external adversaries. Some malware will even
target the SaaS applications themselves, for example, by changing their shares to
“public” so that the data can be retrieved by anyone.

Malicious Insiders

The least common but real SaaS application risk is the internal user who
maliciously shares data for theft or revenge purposes. For example, an employee who
is leaving the company might set a folder’s share permissions to “public” or share
it with an external email address to later steal the data from a remote location.

Accidental Data Exposure

Well-intentioned end users are often untrained and unaware of the risks their
actions pose in SaaS environments. Because SaaS applications are designed to
facilitate easy sharing, it’s understandable that data often becomes
unintentionally exposed. Accidental data exposure by end users is surprisingly
common and includes accidental share, promiscuous share, and ghost share.

Accidental Share

An accidental share happens when a share meant for a particular person is
accidentally sent to the wrong person or group. Accidental shares are common when a
name autofills or is mistyped, which may cause an old email address, the wrong name
or group, or even an external user to have access to the share.
Promiscuous Share

In a promiscuous share, a legitimate share is created for a user, but that user
then shares with other people who shouldn’t have access. Promiscuous shares often
result in the data being publicly shared. These types of shares can go well beyond
the control of the original owner.

Ghost (or Stale) Share

In a ghost share, the share remains active for an employee or vendor that is no
longer working with the company or should no longer have access. Without visibility
and control of the shares, tracking and fixing of shares to ensure that they are
still valid is very difficult.

Compliance Challenges
Most companies and industries face constant data regulatory and compliance
challenges. Compliance and security are not the same thing. Let's review some of
the compliance challenges.

Change and Complexity


Many laws and regulations are obsolete or ambiguous and are not uniformly supported
by international communities. Laws are constantly changing. Some regulations may
also be inconsistent with other applicable laws and regulations, thus requiring
legal interpretation to determine relevance, intent, or precedence. As a result,
businesses and organizations in every industry struggle to achieve and maintain
data compliance.

Compliance and Security


An organization can be fully compliant with all applicable cybersecurity laws and
regulations, yet still not be secure. Conversely, an organization can be secure,
yet not fully compliant. To further complicate this point, the compliance and
security functions in many organizations are often defined and supervised by
separate entities.

Standards and Regulations


Organizations worldwide handle huge amounts of customer data and personal
information, making them a prime target for cybercriminals. New standards and
regulations are being enacted to protect and secure this data.

Payment Card Industry’s Data Security Standard


The Payment Card Industry's Data Security Standard (PCI DSS) establishes its own
cybersecurity standards and best practices for businesses and organizations that
allow payment card purchases. An ever-increasing number of international,
multinational, federal, regional, state, and local laws and regulations also
mandate numerous cybersecurity and data protection requirements for businesses and
organizations worldwide.
European Union General Data Protection Regulation
The European Union (EU) General Data Protection Regulation (GDPR) applies to any
organization that does business with EU citizens. GDPR regulations often apply more
stringent standards for end user and data protections than those that are applied
domestically. Some domestic companies have adopted a policy of complying with GDPR
regulations, just in case their operations may interact with European or
international consumers.


Cybersecurity Fundamentals

Attacker Profiles and Cyberattack Lifecycle

This lesson provides an overview of different attacker profiles and their
motivations in cyberattacks. It explores the strategies and techniques employed by
cybercriminals, state-affiliated groups, hacktivists, cyberterrorists, script
kiddies, and cybercrime vendors. The lesson also describes the stages of the
cyberattack lifecycle.

Attacker Profiles
News outlets are usually quick to showcase high-profile attacks, but the sources of
these attacks are not always easy to identify. Each of the different attacker types
or profiles generally has a specific motivation for the attacks they generate.

Here are some traditional attacker profile types. Because these different attacker
profiles have different motivations, information security professionals must design
cybersecurity defenses that can identify the different attacker motivations and
apply appropriate deterrents. Click the arrows for more information about the
profile type of each attacker.

Cybercriminals
Cybercriminals are the most common attacker profile. The dramatic increase in the
number of ransomware attacks over the last five years generally is attributed to
cybercriminal groups, which are also invested in other crime-for-profit activities.
They are also known for the proliferation of bots and botnet attacks, where
endpoints are infected and then organized collectively by a command-and-control, or
C&C, attack server.

Cyberattack Lifecycle
Modern cyberattack strategy has evolved from a direct attack against a high-value
server or asset (“shock and awe”) to a patient, multistep process that blends
exploits, malware, stealth, and evasion in a coordinated network attack (“low and
slow”).

The cyberattack lifecycle illustrates the sequence of events that an attacker goes
through to infiltrate a network and exfiltrate (or steal) valuable data. Blocking
just one step breaks the chain and can effectively defend an organization’s network
and data against an attack.

Click the arrows for more information about each phase of the cyberattack lifecycle.

Reconnaissance (Attack)

Like common criminals, attackers meticulously plan their cyberattacks. They
research, identify, and select targets, often extracting public information from
targeted employees’ social media profiles or from corporate websites, which can be
useful for social engineering and phishing schemes. Attackers will also use various
tools to scan for network vulnerabilities, services, and applications that they can
exploit, such as network analyzers, network vulnerability scanners, password
crackers, port scanners, web application vulnerability scanners, and Wi-Fi
vulnerability scanners.

Reconnaissance (Defense)

Breaking the cyberattack lifecycle at this phase of an attack begins with proactive
and effective end-user security awareness training that focuses on topics such as
social engineering techniques (for example, phishing, piggybacking, and shoulder
surfing), social media (for example, safety and privacy issues), and organizational
security policies (for example, password requirements, remote access, and physical
security). Another important countermeasure is continuous monitoring and inspection
of network traffic flows to detect and prevent unauthorized port and vulnerability
scans, host sweeps, and other suspicious activity. Effective change and
configuration management processes help to ensure that newly deployed applications
and endpoints are properly configured (for example, disabling unneeded ports and
services) and maintained.

Weaponization (Attack)

Attackers determine which methods to use to compromise a target endpoint. They may
choose to embed intruder code within seemingly innocuous files such as a PDF or
Microsoft Word document or email message. Or, for highly targeted attacks,
attackers may customize deliverables to match the specific interests of an
individual within the target organization.

Weaponization (Defense)
Breaking the cyberattack lifecycle at this phase of an attack is challenging
because weaponization typically occurs within the attacker’s network. However,
analysis of artifacts (both malware and weaponizer) can provide important threat
intelligence to enable effective zero-day protection when delivery (the next step)
is attempted.

Delivery (Attack)

Attackers next attempt to deliver their weaponized payload to a target endpoint via
email, IM, drive-by download (an end user’s web browser is redirected to a webpage
that automatically downloads malware to the endpoint in the background), or
infected file share.

Delivery (Defense)

Breaking the cyberattack lifecycle at this phase of an attack requires visibility
into all network traffic (including remote and mobile devices) to effectively block
malicious or risky websites, applications, and IP addresses and prevent known and
unknown malware and exploits.

Exploitation (Attack)

After a weaponized payload is delivered to a target endpoint, it must be triggered.


An end user may unwittingly trigger an exploit by clicking a malicious link or
opening an infected attachment in an email. An attacker also may remotely trigger
an exploit against a known server vulnerability on the target network.

Exploitation (Defense)

Breaking the cyberattack lifecycle at this phase of an attack begins with proactive
and effective end-user security awareness training that focuses on topics such as
malware prevention and email security. Other important security countermeasures
include vulnerability and patch management; malware detection and prevention;
threat intelligence (including known and unknown threats); blocking risky,
unauthorized, or unneeded applications and services; managing file or directory
permissions and root or administrator privileges; and logging and monitoring
network activity.

Installation (Attack)

Next, an attacker will escalate privileges on the compromised endpoint, for
example, by establishing remote shell access and installing rootkits or other
malware. With remote shell access, the attacker has control of the endpoint and can
execute commands in privileged mode from a command-line interface (CLI) as if
physically sitting in front of the endpoint. The attacker will then move laterally
across the target’s network, executing attack code, identifying other targets of
opportunity, and compromising additional endpoints to establish persistence.

Installation (Defense)

The key to breaking the cyberattack lifecycle at this phase of an attack is to
limit or restrict the attackers’ lateral movement within the network. Use network
segmentation and a Zero Trust model that monitors and inspects all traffic between
zones or segments and provides granular control of applications that are allowed on
the network.

Command and Control (Attack)

Attackers establish encrypted communication channels back to command-and-control
(C2) servers across the internet so that they can modify their attack objectives
and methods as additional targets of opportunity are identified within the victim
network, or to evade any new security countermeasures that the organization may
attempt to deploy if attack artifacts are discovered. Communication is essential to
an attack because it enables the attacker to remotely direct the attack and execute
the attack objectives. C2 traffic must therefore be resilient and stealthy for an
attack to succeed. Attack communication traffic is usually hidden with various
techniques and tools, including encryption, circumvention, port evasion, fast flux
(or Dynamic DNS), and DNS tunneling.

Command and Control (Defense)

Breaking the cyberattack lifecycle at this phase of an attack requires:

Inspecting all network traffic (including encrypted communications)
Blocking outbound C2 communications with anti-C2 signatures (along with file and data pattern uploads)
Blocking all outbound communications to known malicious URLs and IP addresses
Blocking novel attack techniques that employ port evasion methods
Preventing the use of anonymizers and proxies on the network
Monitoring DNS for malicious domains and countering with DNS sinkholing or DNS poisoning (a minimal sketch of the sinkholing idea follows this list)
Redirecting malicious outbound communications to honeypots to identify or block compromised endpoints and analyze attack traffic
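
The following is a minimal, illustrative sketch of the DNS sinkholing idea referenced in the list above: queries for domains on a threat intelligence blocklist are answered with a defender-controlled sinkhole address instead of the attacker's real C2 address, so infected hosts reveal themselves when they attempt to phone home. The domain names and addresses are placeholders, not real indicators.

# Illustrative only: intercept lookups for known-malicious domains.
MALICIOUS_DOMAINS = {"evil-c2.example", "botnet-update.example"}  # placeholder blocklist
SINKHOLE_IP = "10.10.10.10"  # internal address defenders use to log the traffic

def resolve(domain, real_resolver):
    if domain.lower().rstrip(".") in MALICIOUS_DOMAINS:
        print(f"sinkholed query for {domain}")  # alert on the compromised host
        return SINKHOLE_IP
    return real_resolver(domain)

# Example call with a stand-in for a real DNS lookup function.
print(resolve("evil-c2.example", real_resolver=lambda d: "93.184.216.34"))
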
Act on Objective (Attack)

Attackers often have multiple, different attack objectives, including data theft;
destruction or modification of critical systems, networks, and data; and denial-of-
service (DoS). This last stage of the cyberattack lifecycle can also be used by an
attacker to advance the early stages of the lifecycle against another target.
Act on Objective (Defense)

Monitoring and awareness are the primary defense actions performed at this phase.
The 2018 Verizon Data Breach Investigations Report (DBIR) describes this strategy
as a secondary motive in which web applications are compromised to aid and abet in
the attack of another victim. For example, an attacker may compromise a company’s
extranet to breach a business partner who is the primary target.
According to the DBIR, in 2014 there were 23,244 incidents where web applications
were compromised with a secondary motive. The attacker pivots the attack against
the initial victim network to a different victim network, thus making the initial
victim an unwitting accomplice.

High-Profile Attacks
The goals of attackers have changed dramatically and are now mostly associated
with financial gain.

Video: High-Profile Attacks


Watch the video for more information about the scope or scale of the high-profile
attacks that have occurred.


High-Profile Cyberattacks
The following are the different types of high-profile cyberattacks:

SolarWinds
In December 2020, the cybersecurity firm FireEye and the U.S. Treasury Department
both reported attacks involving malware in a software update to their SolarWinds
Orion Network Management System perpetrated by the APT29 (Cozy Bear/Russian SVR)
threat group. This attack is one of the most damaging supply chain attacks in
history, potentially impacting more than 300,000 SolarWinds customers, including
the U.S. federal government and 425 of the Fortune 500 companies.

Colonial Pipeline
In May 2021, the Colonial Pipeline Company – which operates one of the largest fuel
pipelines in the U.S. – was hit by the DarkSide threat actor group with a
Ransomware-as-a-Service (RaaS) attack. Although the company acted quickly to shut
down its network systems and paid the $4.4 million ransom, operations were not
fully restored for six days, which caused major fuel shortages and other supply
chain issues along the U.S. eastern seaboard. Additionally, the personal
information – including health insurance information, social security numbers,
driver’s licenses, and military identification numbers – of nearly 6,000
individuals was compromised.

JBS S.A.
In May 2021, Brazil-based JBS S.A. – the largest producer of beef, chicken, and
pork worldwide – was hit by a ransomware attack attributed to the REvil threat
actor group. Although the company paid the $11 million ransom, its U.S. and
Australia beef processing operations were shut down for a week.

Government of Ukraine
In January 2022, several Ukrainian government websites, including the ministry of
foreign affairs and the education ministry, were hacked by suspected Russian
attackers. Threatening messages were left on the websites during a period of
heightened tensions between the governments of Ukraine and Russia.

MITRE ATT&CK Framework


The MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework
is a comprehensive matrix of tactics and techniques designed for threat hunters,
defenders, and red teams to help classify attacks, identify attack attribution and
objective, and assess an organization's risk. Organizations can use the framework
to identify security gaps and prioritize mitigations based on risk.

MITRE Started ATT&CK Against Enterprise Networks


MITRE started ATT&CK in 2013 to document the tactics, techniques and procedures
(TTPs) that advanced persistent threats (APTs) use against enterprise networks. It
was created out of a need to describe adversary TTPs that would be used by a MITRE
research project called FMX. The objective of FMX was to investigate how endpoint
telemetry data and analytics could help improve post-intrusion detection of
attackers operating within enterprise networks. The ATT&CK framework was used as
the basis for testing the efficacy of the sensors and analytics under FMX and
served as the common language both offense and defense could use to improve over
time. Click the tabs for more information about three iterations of MITRE ATT&CK.

ATT&CK for Enterprise


ATT&CK for Mobile
Pre-ATT&CK
Focuses on adversarial behavior in Windows, Mac, Linux, and cloud environments

Sub-Techniques
Sub-techniques are a more specific description of the adversarial behavior used to
achieve a goal. They describe behavior at a lower level than a technique. For
example, an adversary may dump credentials by accessing the Local Security
Authority (LSA) secrets.

Supply-Chain Management
Following are the highlighted practices in cyber supply chain management.

End-to-end risk management


All products are subject to Palo Alto Networks’ end-to-end product security
framework, which was designed to provide defense in depth for each stage of the
product lifecycle.

Continuous improvement
Cyber Supply Chain Risk Management (C-SCRM) processes must rapidly adapt to changes
in the threat landscape. Palo Alto Networks’ cross-functional security council
rebalances the C-SCRM program’s security priorities every six months.

Public-private partnerships
Palo Alto Networks participates in multiple voluntary public-private partnerships,
including the Department of Homeland Security’s Information and Communications
Technology Supply Chain Risk Management Task Force and the U.S. Customs and Border
Protection's Customs-Trade Partnership Against Terrorism. These programs encourage
Palo Alto Networks’ suppliers and the broader security community to develop robust
supply chain and cybersecurity practices.

Contract manufacturers simplify cyber supply chain risk management


Managing thousands of supplier relationships is challenging for organizations at
every maturity level. Palo Alto Networks leverages established contract
manufacturers with demonstrably diligent cybersecurity and supplier risk management
programs.
Common Vulnerabilities and Exposures
Common Vulnerabilities and Exposures (CVE) is a system for referencing publicly
known vulnerabilities by identifiers. The goal of the system is to make it easier
to share vulnerability data across stakeholders, including software vendors, tool
vendors, security practitioners, and end users.

To evaluate the extent and severity of each CVE across your endpoints, you can
drill down into each CVE in Cortex XDR and view all the endpoints and applications
in your environment impacted by the CVE.

Cortex XDR retrieves the latest information from the NIST public database. From
Add-ons > Host Insights > Vulnerability Assessment, select CVEs on the upper-right
bar. For each vulnerability, Cortex XDR displays default and optional values.

You can click each individual CVE to view in-depth details about it on a panel that
appears on the right.
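
Outside of the Cortex XDR console, the same NIST data can be pulled directly from the public National Vulnerability Database (NVD) REST API. The sketch below assumes the NVD API 2.0 endpoint and field names (verify them against current NVD documentation before relying on this) and uses the third-party requests library.

# Sketch of a direct NVD lookup; endpoint and JSON fields follow the NVD 2.0
# schema as best understood and should be verified against NVD's documentation.
import requests

def fetch_cve(cve_id):
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    resp = requests.get(url, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    if not items:
        return None
    cve = items[0]["cve"]
    description = next(
        (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
        "",
    )
    return {"id": cve.get("id"), "description": description}

print(fetch_cve("CVE-2021-44228"))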

Common Vulnerability Scoring System


The Common Vulnerability Scoring System (CVSS) offers a method for enumerating a
vulnerability's key characteristics and generating a numerical score that reflects
the vulnerability's severity.

To assist organizations in correctly evaluating and prioritizing their
vulnerability management processes, the numerical score can then be converted into
a qualitative representation (such as low, medium, high, and critical).
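
As a small worked example, the following helper maps a numeric base score to the qualitative bands published for CVSS v3.x (None 0.0, Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0).

# Convert a CVSS v3.x base score into its published qualitative severity band.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # -> "Critical"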

Course Summary
Please ensure all knowledge check questions have been answered in order for this
course to be marked complete.

Now that you have completed this course, you should be able to:

Describe the current cybersecurity landscape

Identify the risks and security challenges associated with SaaS applications

Describe various security and data protection regulations and standards

Identify and classify different attacker profiles

Illustrate the different phases of the cyberattack lifecycle


Cybersecurity Fundamentals

Malware Types and Advanced Malware

Attackers use a variety of techniques and attack types to achieve their objectives.
Malware and exploits are integral to the modern cyberattack strategy. This lesson
describes the different malware types and advanced malware properties.

Malware and Ransomware


Malware (short for “malicious software”) is a file or code that typically takes
control of, collects information from, or damages an infected endpoint.

Definitions and Objectives


Malware is an inclusive term for all types of malicious software.

Malware
Malware usually has one or more of the following objectives: to provide remote
control for an attacker to use an infected machine, to send spam from the infected
machine to unsuspecting targets, to investigate the infected user’s local network,
and to steal sensitive data.

Advanced/Modern Malware
Advanced or modern malware generally refers to new or unknown malware. These types
of malware are highly sophisticated and often have specialized targets. Advanced
malware typically can bypass traditional defenses.

Malware Types
Malware is varied in type and capabilities. Let's review several malware types.

Click the arrows for more information about the malware types.

Logic Bombs

A logic bomb is malware that is triggered by a specified condition, such as a given
date or a particular user account being disabled.

Spyware and Adware

Spyware and adware are types of malware that collect information, such as internet
surfing behavior, login credentials, and financial account information, on an
infected endpoint. Spyware often changes browser and other software settings and
slows computer and internet speeds on an infected endpoint. Adware is spyware that
displays annoying advertisements on an infected endpoint, often as pop-up banners.

Rootkits

A rootkit is malware that provides privileged (root-level) access to a computer.
Some rootkits are installed in the firmware (BIOS) of a machine, which means
operating system-level security tools cannot detect them.

Bootkits

A bootkit is malware that is a kernel-mode variant of a rootkit, commonly used to
attack computers that are protected by full-disk encryption.

Backdoors

A backdoor is malware that allows an attacker to bypass authentication to gain
access to a compromised system.

Anti-AV

Anti-AV is malware that disables legitimately installed antivirus software on the
compromised endpoint, thereby preventing automatic detection and removal of other
malware.

Ransomware

Ransomware is malware that locks a computer or device (Locker ransomware) or
encrypts data (Crypto ransomware) on an infected endpoint with an encryption key
that only the attacker knows, thereby making the data unusable until the victim
pays a ransom (usually with cryptocurrency, such as Bitcoin). Reveton and LockeR
are two examples of Locker ransomware. Locky, TeslaCrypt/EccKrypt, Cryptolocker,
and Cryptowall are examples of Crypto ransomware.

Trojan Horses

A Trojan horse is malware that is disguised as a harmless program but actually
gives an attacker full control and elevated privileges of an endpoint when
installed. Unlike other types of malware, Trojan horses are typically not self-
replicating.

Virus

A virus is malware that is self-replicating but must first infect a host program
and be executed by a user or process.

Worms

A worm is malware that typically targets a computer network by replicating itself
to spread rapidly. Unlike viruses, worms do not need to infect other programs and
do not need to be executed by a user or process.

Advanced or Modern Malware


Modern malware is stealthy and evasive. It plays a central role in a coordinated
attack against a target.
Advanced or modern malware leverages networks to gain power and resilience. Modern
malware can be updated—just like any other software application—so that an attacker
can change course and dig deeper into the network or make changes and enact
countermeasures.

This is a fundamental shift compared to earlier types of malware, which were
generally independent agents that simply infected and replicated themselves.

Video: Advanced Malware Evolution


Watch the video for more information about the evolution of advanced malware.


Types of Advanced or Modern Malware


Below are some characteristics of the more advanced malware.

Obfuscation
Polymorphism
Distributed
Multi-functional

Cybersecurity Fundamentals

Ransomware, Vulnerabilities, and Exploits

This lesson provides an in-depth understanding of malware and ransomware, their
types, objectives, and properties. It also provides an overview of the relationship
between vulnerabilities and exploits and explores the concept of ransomware and its
various types.

Ransomware Types
Although cryptographic ransomware is the most common and successful type of
ransomware, it is not the only one. It’s important to remember that ransomware is
not a single family of malware but is a criminal business model in which malware is
used to hold something of value for ransom.

Advanced Introduction to Ransomware


While holding something of value for ransom is not a new concept, ransomware has
become a multibillion-dollar criminal business targeting both individuals and
corporations. Due to its low barriers to entry and effectiveness in generating
revenue, it has quickly displaced other cybercrime business models and has become
the largest threat facing organizations today. It is also important to note that
although threat actors generally do decrypt your data after the ransom is paid (the
ransomware business model depends on a reasonable expectation that paying a ransom
will restore access to your data), there are no guarantees that this will be the
case. Additionally, many threat actors are now exfiltrating a copy of their
victims’ data – particularly PII and credit card numbers – before encrypting it,
then selling the data on the dark web after the ransom is paid.

Ransomware Attacks on Organizations


Though the malware deployed in the current generation of cryptographic ransomware
attacks is not especially sophisticated, it has proven very effective at not only
generating revenue for the criminal operators but also preventing impacted
organizations from continuing their normal operations. New headlines each week
demonstrate that organizations large and small are vulnerable to these threats,
enticing new attackers to jump onto the bandwagon and begin launching their own
ransomware campaigns.

Attackers Execute Five Steps


For a ransomware attack to be successful, attackers must execute the following five
steps.

If the attacker fails in any of these steps, the scheme will be unsuccessful.
Although the concept of ransomware has existed for decades, the technology and
techniques, such as reliable encrypting and decrypting, required to complete all
five of these steps on a wide scale were not available until just a few years ago.
Click the arrows for more information about the five steps.

Step 1: Compromise and Control a System or Device

Ransomware attacks typically begin by using social engineering to trick users into
opening an attachment or viewing a malicious link in their web browser. This allows
attackers to install malware onto a system and take control. However, another
increasingly common tactic is for attackers to gain access to the network, perform
reconnaissance on the network to identify potential targets and establish Command
and Control (C2), install other malware and create backdoor accounts for
persistence, and potentially exfiltrate data.

Step 2: Prevent Access to the System

Attackers will either identify and encrypt certain file types or deny access to the
entire system.

Step 3: Notify Victim


Though seemingly obvious, attackers and victims often speak different languages and
have varying levels of technical capabilities. Attackers must alert the victim
about the compromise, state the demanded ransom amount, and explain the steps for
regaining access.

Step 4: Accept Ransom Payment

To receive payment while evading law enforcement, attackers utilize
cryptocurrencies such as Bitcoin for the transaction.

Step 5: Return Full Access

Attackers must return access to the device(s). Failure to restore the compromised
systems destroys the effectiveness of the scheme as no one would be willing to pay
a ransom if they didn’t believe their valuables would be returned.

Vulnerabilities and Exploits


Vulnerabilities and exploits can be leveraged to force software to act in ways it’s
not intended to, such as gleaning information about the current security defenses
in place.

Vulnerability
Vulnerabilities are routinely discovered in software at an alarming rate.
Vulnerabilities may exist in software when the software is initially developed and
released, or vulnerabilities may be inadvertently created, or even reintroduced,
when subsequent version updates or security patches are installed.

Exploit
An exploit is a type of malware that takes advantage of a vulnerability in an
installed endpoint or server software such as a web browser, Adobe Flash, Java, or
Microsoft Office. An attacker crafts an exploit that targets a software
vulnerability, causing the software to perform functions or execute code on behalf
of the attacker.

Patching Vulnerabilities
Security patches are developed by software vendors as quickly as possible after a
vulnerability has been discovered in their software.

1. Discovery
An attacker may learn of a vulnerability and begin exploiting it before the
software vendor is aware of the vulnerability or has an opportunity to develop a
patch.

2. Development of Patch
The delay between the discovery of a vulnerability and development and release of a
patch is known as a zero-day threat (or exploit).
3. Test and Deploy Patch
It may be months or years before a vulnerability is announced publicly. After a
security patch becomes available, time inevitably is required for organizations to
properly test and deploy the patch on all affected systems. During this time, a
system running the vulnerable software is at risk of being exploited by an
attacker.

How Exploits Are Executed


Exploits can be embedded in seemingly innocuous data files (such as Microsoft Word
documents, PDF files, and webpages), or they can target vulnerable network
services. Exploits are particularly dangerous because they are often packaged in
legitimate files that do not trigger anti-malware (or antivirus) software and are
therefore not easily detected. Click the tabs in each step for more information
about how exploits are executed.

1. Creation
2. Action
3. Techniques
4. Heap Spray
Creation of an exploit data file is a two-step process. The first step is to embed
a small piece of malicious code within the data file. However, the attacker still
must trick the application into running the malicious code. Thus, the second part
of the exploit typically involves memory corruption techniques that allow the
attacker’s code to be inserted into the execution flow of the vulnerable software.

Timeline of Eliminating a Vulnerability


Vulnerabilities can be exploited from the time software is deployed until it is
patched. Click the arrows for more information about the timeline to eliminate a
vulnerability.

1. Software Deployed

For local systems, the only way to eliminate vulnerabilities is to effectively
patch systems and software.

2. Vulnerability Discovered

Security patches are developed by software vendors as quickly as possible after a
vulnerability has been discovered in their software.

3. Exploits Begin

The process of discovery and patching will continue. According to research by Palo
Alto Networks, 78 percent of exploits take advantage of vulnerabilities that are
less than two years old, which implies that developing and applying patches is a
lengthy process.

4. Public Announcement of Vulnerability

An attacker may learn of a vulnerability and begin exploiting it before the
software vendor is aware of the vulnerability or has an opportunity to develop a
patch.

5. Patch Released

This delay between the discovery of a vulnerability and development and release of
a patch is known as a zero-day threat (or exploit).

6. Patch Deployed

Months or years could pass by before a vulnerability is announced publicly. After a
security patch becomes available, time inevitably is required for organizations to
properly test and deploy the patch on all affected systems.

7. Protected by Vendor Patch

During this time, a system running the vulnerable software is at risk of being
exploited by an attacker.

Course Summary
Please ensure all knowledge check questions have been answered in order for this
course to be marked complete.

Now that you have completed this course, you should be able to:

Identify the objectives and properties of different malware types


Describe advanced malware and its role in coordinated attacks
Describe the concepts of vulnerabilities and exploits and their role in
cyberattacks
Identify ransomware types and the steps for a successful ransomware attack


Cybersecurity Fundamentals

Cyberattack Techniques

Attackers use a variety of techniques and attack types to achieve their objectives.
Spamming and phishing are commonly employed techniques to deliver malware and
exploits to an endpoint via an email executable or a web link to a malicious
website. Once an endpoint is compromised, an attacker typically installs back
doors, remote access Trojans (RATs), and other malware to ensure persistence. This
lesson describes spamming and phishing techniques, how bots and botnets function,
and the different types of botnets.
Business Email Compromise (BEC)
Business email compromise (BEC) is one of the most prevalent types of cyberattacks
that organizations face today. The FBI Internet Crime Complaint Center (IC3)
estimates that, in aggregate, BEC attacks cost organizations three times more than
any other cybercrime, and BEC incidents represented nearly a third of the incidents
investigated by the Palo Alto Networks Unit 42 Incident Response Team. According to
the Verizon Data Breach Investigations Report (DBIR), BEC is the second most common
form of social engineering today.

Spam and phishing emails are the most common delivery methods for malware. The
volume of spam email as a percentage of total global email traffic fluctuates
widely from month to month – typically 45 to 75 percent. Although most end users
today are readily able to identify spam emails and are savvier about not clicking
links, opening attachments, or replying to spam emails, spam remains a popular and
effective infection vector for the spread of malware. Phishing attacks, in contrast
to spam, are becoming more sophisticated and difficult to identify.

Phishing Attacks
We often think of spamming and phishing as the same thing, but they are actually
separate processes, and they each require their own mitigations and defenses.

Spear Phishing
Spear phishing is a targeted phishing campaign that appears more credible to its
victims by gathering specific information about the target, giving it a higher
probability of success. A spear phishing email may spoof an organization (such as a
financial institution) or individual that the recipient actually knows and does
business with. It may also contain very specific information (such as the
recipient’s first name, rather than just an email address).

Spear phishing, and phishing attacks in general, are not always conducted via
email. A link is all that is required, such as a link on Facebook or a message
board or a shortened URL on Twitter. These methods are particularly effective in
spear phishing attacks because they allow the attacker to gather a great deal of
information about the targets and then lure them through dangerous links into a
place where the users feel comfortable.
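
One small, illustrative signal defenders use when triaging such links is whether a domain closely resembles, but does not exactly match, a well-known domain. The sketch below uses Python's difflib for a rough similarity check; the domain list and threshold are placeholders, not a complete anti-phishing control.

# Toy lookalike-domain check; domains and threshold are illustrative only.
from difflib import SequenceMatcher

KNOWN_DOMAINS = {"paypal.com", "microsoft.com", "example-bank.com"}

def lookalike_of(domain, threshold=0.85):
    domain = domain.lower()
    for known in KNOWN_DOMAINS:
        similarity = SequenceMatcher(None, domain, known).ratio()
        if domain != known and similarity >= threshold:
            return known  # suspiciously similar to a legitimate domain
    return None

print(lookalike_of("paypa1.com"))     # -> "paypal.com"
print(lookalike_of("microsoft.com"))  # -> None (an exact match is not a lookalike)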

Whaling
Whaling is a type of spear phishing attack that is specifically directed at senior
executives or other high-profile targets within an organization. A whaling email
typically purports to be a legal subpoena, customer complaint, or other serious
matter.

Watering Hole
Watering hole attacks compromise websites that are likely to be visited by a
targeted victim – for example, an insurance company website that may be frequently
visited by healthcare providers. The compromised website will typically infect
unsuspecting visitors with malware (known as a “drive-by download”).
Pharming
A pharming attack redirects a legitimate website’s traffic to a fake site,
typically by modifying an endpoint’s local hosts file or by compromising a DNS
server (DNS poisoning).

Bots and Botnets


Bots and botnets are notoriously difficult for organizations to detect and defend
against using traditional anti-malware solutions.

Click each tab for more information about bots and botnets.

Bots

Botnets

Instances of Bots and Botnets

Flexibility and Ability

Disabling a Botnet
Botnets themselves are dubious sources of income for cybercriminals. Botnets are
created by cybercriminals to harvest computing resources (bots). Control of botnets
(through C2 servers) can then be sold or rented out to other cybercriminals.

Challenges to Disabling a Botnet


The key to “taking down” or “decapitating” a botnet is to separate the bots
(infected endpoints) from their brains (C2 servers). If the bots cannot get to
their servers, they cannot get new instructions, upload stolen data, or do anything
that makes botnets so unique and dangerous. Although this approach may seem
straightforward, disabling a botnet presents many challenges.

Click the tabs for more information about the challenges that may occur while
disabling a botnet.

Resources
Servers
Redundancy
Quick Recovery
DDoS Attacks
Extensive resources are typically required to map the distributed C2 infrastructure
of a botnet. Mapping a botnet's infrastructure almost always requires an enormous
amount of investigation, expertise, and coordination between numerous industry,
security, and law enforcement organizations worldwide.

Actions for Disabling a Botnet


The following are actions for disabling a botnet. Note: Effectively deterring a
botnet infection may be an ongoing process.

Disabling Internet Access


Disabling internet access is a highly recommended first action, along with
aggressively monitoring local network activity to identify the infected devices.
The first response to discovery of infected devices is to remove them from the
network, thus severing any connections to a C2 server and keeping the infection
from spreading.

Monitor Local Network Activity


The next response is to ensure that current patches and updates are applied. If
infected endpoints are still persistently attempting to connect to a C2 service or
an attack target, then the endpoints should be imaged and cleansed.

Remove Infected Devices and Botnet Software


Effectively deterring a botnet infection may be an ongoing process. Devices may
return to a dormant state and appear to be clean of infection for prolonged periods
of time, only to one day be “awakened” by a signal from a C2 service.

Install Current Patches


The Internet Service Provider (ISP) community has a commitment to securing internet
backbones and core services known as the Shared Responsibility Model. Adhering to
this model does not ensure that ISP providers can fully identify and disable C2
service clusters. Full termination of C2 architecture can be extremely difficult.

Spamming Botnets
The largest botnets are often dedicated to sending spam. The premise is
straightforward: The attacker attempts to infect as many endpoints as possible, and
the endpoints can then be used to send out spam email messages without the end
users’ knowledge.

Productivity
Reputation
Example: Rustock Botnet
The Rustock botnet is an example of a spamming botnet. Rustock could send up to
25,000 spam email messages per hour from an individual bot. At its peak, it sent an
average of 192 spam emails per minute per bot. Rustock is estimated to have
infected more than 2.4 million computers worldwide. In March 2011, the U.S. Federal
Bureau of Investigation (FBI), working with Microsoft and others, was able to take
down the Rustock botnet. By then, the botnet had operated for more than five years.
At the time, it was responsible for sending up to 60 percent of the world’s spam.

Distributed Denial-of-Service Attack


A DDoS attack is a type of cyberattack in which extremely high volumes of network
traffic such as packets, data, or transactions are sent to the target victim’s
network to make their network and systems (such as an e-commerce website or other
web application) unavailable or unusable.

Click the arrow for more information about how DDoS attacks are used and their
impact on an organization.

Use of Bots

A DDoS botnet uses bots as part of a DDoS attack, overwhelming a target server or
network with traffic from a large number of bots. In such attacks, the bots
themselves are not the target of the attack. Instead, the bots are used to flood
some other remote target with traffic. The attacker leverages the massive scale of
the botnet to generate traffic that overwhelms the network and server resources of
the target.
Financial Botnets
Financial botnets, such as ZeuS and SpyEye, are responsible for the direct theft of
funds from all types of enterprises. These types of botnets are typically not as
large as spamming or DDoS botnets, which grow as large as possible for a single
attacker. Click the tabs for more information about where financial botnets are
sold and their impact.

Existence

Impact

Difference between DoS and DDoS


A Denial-of-Service (DoS) attack is an attack meant to shut down a machine or
network, making it inaccessible to its intended users. A Distributed Denial-of-
Service (DDoS) attack is a type of DoS attack in which multiple systems orchestrate
a synchronized DoS attack against a single target. The essential difference is that
instead of being attacked from one location, the target is attacked from many
locations at once.

Although a DDoS attack is a type of DoS attack, it is used far more often because
its distributed nature makes it more powerful and harder to defend against than
other types of DoS attacks.


Cybersecurity Fundamentals

Advanced Persistent Threats and Wi-Fi Vulnerabilities

With the explosive growth in fixed and mobile devices over the past decade,
wireless (Wi-Fi) networks are growing exponentially—and so is the attack surface
for advanced persistent threats (APTs). This lesson describes Wi-Fi vulnerabilities
and attacks and APTs.

Advanced Persistent Threats


Advanced persistent threats, or APTs, are a class of threats that are far more
deliberate and potentially devastating than other types of cyberattacks. APTs are
generally coordinated events that are associated with cybercriminal groups.
Click the tabs for more information about the different types of APTs.

Advanced

Persistent

Threat

Example: Lazarus
Attacks against nation-states and corporations are common, and the group of
cybercriminals that may have done the most damage is Lazarus. The Lazarus group is
known as an APT. The Lazarus group has been known to operate under different names,
including Bluenoroff and Hidden Cobra. They were initially known for launching
numerous attacks against government and financial institutions in South Korea and
Asia. In more recent years, the Lazarus group has been targeting banks, casinos,
financial investment software developers, and crypto-currency businesses. The
malware attributed to this group recently has been found in 18 countries around the
world.

Wi-Fi Challenges
A security professional's first concern may be whether a Wi-Fi network is secure.
However, for the average user, the unfortunate reality is that Wi-Fi connectivity
is more about convenience than security.

Security professionals must secure Wi-Fi networks—but they must also protect the
mobile devices their organization’s employees use to perform work and access
potentially sensitive data, no matter where they are or whose network they’re on.

Public Airwaves
Wi-Fi is conducted over public airwaves. The 2.4GHz and 5GHz frequency ranges that
are set aside for Wi-Fi communications are also shared with other technologies,
such as Bluetooth. As a result, Wi-Fi is extremely vulnerable to congestion and
collisions.

Wi-Fi Network
Additional problems exist because Wi-Fi device settings and configurations are well known, published openly, shared, and even broadcast. To begin securing a WLAN, you should disable Service Set Identifier (SSID) broadcast. If the SSID is configured to broadcast, the network is readily discoverable, which makes it easier for an attacker to identify a target and plan an attack.

Mobile Device & Customer Apps


Mobile devices themselves have significant vulnerabilities. Mobile device management is difficult to maintain when organizations allow bring-your-own-device (BYOD) practices. For many users, patching and securing their mobile devices is an afterthought, and they often consider convenience and performance before security. End users often install apps that bring significant risk to both the device and the network, and they often disable security features that impact device performance.

Wireless Security
Wi-Fi security begins—and ends—with authentication. An organization cannot protect
its digital assets if it cannot control who has access to its wireless network.

Apply Effective Wireless Security


Wi-Fi and wireless connected devices present additional challenges that might not
be considered with wired networks.

Limit Access Through Authentication
Secure Content via Encryption
Participate in Known or Constrained Networks
Wi-Fi Protected Access
WLAN networks that do not subscribe to an 802.1x model may still experience
authentication challenges.

Security Protocols
The Wi-Fi Protected Access (WPA) security standard was published as an interim standard in 2003 and was quickly followed by WPA2 in 2004. WPA and WPA2 contain improvements to protect against the inherent flaws in Wired Equivalent Privacy (WEP), including changes to the encryption.

WEP
The WEP encryption standard is no longer secure enough for Wi-Fi networks. WPA2 and
the emerging WPA3 standards provide strong encryption capabilities and manage
secure authentication via the 802.1x standard.

As mobile device processors have advanced to handle 64-bit computing, AES, as a scalable symmetric encryption algorithm, solves the problems of managing secure, encrypted content on mobile devices.

WPA2
WPA2-PSK supports 256-bit keys, which require 64 hexadecimal characters.

Because requiring users to enter a 64-hexadecimal-character key is impractical, WPA2 includes a function that generates a 256-bit key from a much shorter passphrase created by the administrator of the Wi-Fi network, using the SSID of the AP as the salt for the one-way hash function.
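Concretely, the derivation defined for WPA/WPA2-PSK is PBKDF2 with HMAC-SHA1, 4096 iterations, and the SSID as the salt, producing a 32-byte (256-bit) key. The minimal Python sketch below illustrates that derivation; the passphrase and SSID values are placeholders, not a real network.

```python
import hashlib

def derive_wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA2 pre-shared key from a passphrase and SSID."""
    return hashlib.pbkdf2_hmac(
        "sha1",               # underlying hash used by the HMAC
        passphrase.encode(),  # administrator-chosen passphrase
        ssid.encode(),        # SSID of the AP acts as the salt
        4096,                 # iteration count used by WPA/WPA2-PSK
        dklen=32,             # 32 bytes = 256 bits
    )

# Placeholder values for illustration only:
psk = derive_wpa2_psk("correct horse battery staple", "ExampleSSID")
print(psk.hex())  # prints the key as 64 hexadecimal characters
```

Because the SSID is part of the derivation, the same passphrase produces different keys on networks with different SSIDs, which is one reason precomputed password tables must be built per SSID.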

WPA3
WPA3 was published in 2018. Its security enhancements include more robust brute-force attack protection, improved hotspot and guest access security, simpler integration with devices that have limited or no user interface (such as IoT devices), and a 192-bit security suite. Newer Wi-Fi routers and client devices will likely support both WPA2 and WPA3 to ensure backward compatibility in mixed environments.

According to the Wi-Fi Alliance, WPA3 features include improved security for IoT
devices such as smart bulbs, wireless appliances, smart speakers, and other screen-
free gadgets that make everyday tasks easier.

Evil Twin
Perhaps the easiest way for an attacker to find a victim to exploit is to set up a
wireless access point that serves as a bridge to a real network. An attacker can
inevitably bait a few victims with “free Wi-Fi access.”

Baiting a victim with free Wi-Fi access requires a potential victim to stumble on
the access point and connect. The attacker can’t easily target a specific victim,
because the attack depends on the victim initiating the connection. Attackers now
try to use a specific name that mimics a real access point. Click the arrows for
more information about how the Evil Twin attack is executed.

1. Mimic a real access point

A variation on this approach is to use a more specific name that mimics a real
access point normally found at a particular location–the Evil Twin. For example, if
a local airport provides Wi-Fi service and calls it “Airport Wi-Fi,” the attacker
might create an access point with the same name using an access point that has two
radios.

2. Catch a greater number of users

Average users cannot easily discern when they are connected to a real access point
or a fake one, so this approach would catch a greater number of users than a method
that tries to attract victims at random. Still, the user has to select the network,
so a bit of chance is involved in trying to reach a particular target.

3. Reach a large number of people; not targeted

The main limitation of the Evil Twin attack is that the attacker can’t choose the
victim. In a crowded location, the attacker will be able to get a large number of
people connecting to the wireless network to unknowingly expose their account names
and passwords. However, it’s not an effective approach if the goal is to target
employees in a specific organization.

Jasager
To understand a more targeted approach than the Evil Twin attack, think about what
happens when you bring your wireless device back to a location that you’ve
previously visited.

Video: Jasager Attack


When you bring your laptop home, you don’t have to choose which access point to
use, because your device remembers the details of wireless networks to which it has
previously connected. The same goes for visiting the office or your favorite coffee
shop.

Watch the video for more information about a normal wireless device connectivity
scenario and a Jasager attack scenario.

SSLstrip
After a user connects to a Wi-Fi network that’s been compromised–or to an
attacker’s Wi-Fi network masquerading as a legitimate network–the attacker can
control the content that the victim sees. The attacker simply intercepts the
victim’s web traffic, redirects the victim’s browser to a web server that it
controls, and serves up whatever content the attacker desires.

Emotet
Emotet is a Trojan, first identified in 2014, that has long been used in spam
botnets and ransomware attacks. Recently, it was discovered that a new Emotet
variant is using a Wi-Fi spreader module to scan Wi-Fi networks looking for
vulnerable devices to infect. The Wi-Fi spreader module uses the infected device to scan nearby Wi-Fi networks and then attempts to connect to vulnerable Wi-Fi networks via a brute-force attack. After successfully connecting to a Wi-Fi network, Emotet then scans for non-hidden shares and attempts another brute-force attack to guess usernames and passwords on other devices connected to the network. It then installs its malware payload and establishes command-and-control (C2) communications on the newly infected devices.

SSLstrip Strategy
SSLstrip strips SSL encryption from a “secure” session. When a user connected to a
compromised Wi-Fi network attempts to initiate an SSL session, the modified access
point intercepts the SSL request.

With SSLstrip, the modified access point displays a fake padlock in the victim’s
web browser. Webpages can display a small icon called a favicon next to a website
address in the browser’s address bar. SSLstrip replaces the favicon with a padlock
that looks like SSL to an unsuspecting user.
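To make the downgrade concrete, the sketch below shows the core idea behind SSLstrip-style link rewriting: an intercepting proxy rewrites HTTPS references in a page to plain HTTP before handing the page to the victim, so the victim's browser never starts an encrypted session while the attacker keeps its own HTTPS connection to the real server. This is a simplified, illustrative sketch of the concept, not the actual SSLstrip tool.

```python
def strip_https_links(html: str) -> str:
    """Illustrative only: rewrite secure links in an intercepted page to HTTP."""
    return html.replace("https://", "http://")

# Example: a login form that originally posted over HTTPS
page = '<form action="https://bank.example.com/login" method="post">'
print(strip_https_links(page))
# -> <form action="http://bank.example.com/login" method="post">
```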

Wi-Fi Attacks
There are different types of Wi-Fi attacks that hackers use to eavesdrop on
wireless network connections to obtain credentials and spread malware.

Doppelganger
Doppelganger is an insider attack that targets WPA3-Personal protected Wi-Fi
networks. The attacker spoofs the source MAC address of a device that is already
connected to the Wi-Fi network and attempts to associate with the same wireless
access point.

Cookie Guzzler
Muted Peer and Hasty Peer are variants of the cookie guzzler attack, which exploit the Anti-Clogging Mechanism (ACM) of the Simultaneous Authentication of Equals (SAE) key exchange in WPA3-Personal.

Course Summary
Please ensure all knowledge check questions have been answered in order for this
course to be marked complete.

Now that you have completed this course, you should be able to:
Describe how bots and botnets work and explain the different types of botnets
Identify the latest cyberattack techniques
Describe how to defend against cyberattacks
Identify how spamming and phishing attacks are performed
Describe Wi-Fi vulnerabilities, attacks, and advanced persistent threats


Network Security Prerequisites

Security Models and Perimeter-Based Security

This lesson describes the core concepts of security models, the importance of
security models, and the functions of a perimeter-based security model.

Perimeter-Based Security Model


Perimeter-based network security models date back to the early mainframe era (circa
late 1950s), when large mainframe computers were located in physically secure
“machine rooms.” These rooms could be accessed by a limited number of remote job
entry (RJE) terminals directly connected to the mainframe in physically secure
areas.

Relies on Physical Security


Today’s data centers are the modern equivalent of machine rooms, but perimeter-
based physical security is no longer sufficient. Click the arrows for more
information about several obvious but important reasons for the security issues
associated with perimeter-based security.

Mainframe Computers

Mainframe computers predate the internet. In fact, mainframe computers predate ARPANET, which predates the internet. Today, an attacker uses the internet to remotely gain access, instead of physically breaching the data center perimeter.

Processing Power

The primary value of the mainframe computer was its processing power. The
relatively limited data that was produced was typically stored on near-line media,
such as tape. Today, data is the target. Data is stored online in data centers and
in the cloud, and it is a high-value target for any attacker.

Data Center

Data centers today are remotely accessed by millions of remote endpoint devices
from anywhere and at any time. Unlike the RJEs of the mainframe era, modern
endpoints (including mobile devices) are far more powerful than many of the early
mainframe computers and are themselves targets.

Assumes Trust on Internal Network


The primary issue with a perimeter-based network security strategy, which deploys
countermeasures at a handful of well-defined entrance and exit points to the
network, is that the strategy relies on the assumption that everything on the
internal network can be trusted. Click the tabs for more information about modern
business conditions and computing environments that perimeter-based strategies fail
to address.

“Internal” and “External” Distinction


Remote employees, mobile users, and cloud computing solutions blur the distinction
between “internal” and “external.”

Wireless Technologies
Wireless technologies, partner connections, and guest users introduce countless additional pathways into the network, including through branch offices, which may be located in untrusted countries or regions.

Insiders
Insiders, whether intentionally malicious or just careless, may present a very real
security threat.

Cyberthreats
Sophisticated cyberthreats could penetrate perimeter defenses and gain free access
to the internal network.

Stolen Credentials
Malicious users can gain access to the internal network and sensitive resources by
using the stolen credentials of trusted users.

Internal Networks
Internal networks are rarely homogeneous. They include pockets of users and
resources with different levels of trust or sensitivity, and these pockets should
ideally be separated (for example, research and development and financial systems
versus print or file servers).

Allows Unwanted Traffic


A broken trust model is not the only issue with perimeter-centric approaches to
network security. Another contributing factor is that traditional security devices
and technologies (such as port-based firewalls) commonly used to build network
perimeters let too much unwanted traffic through.

Click the tabs for more information about the typical shortcomings and inabilities
of perimeter-centric approaches.

Application Control
Encrypted Traffic
Identify Users
Protect Against Attacks
Net Result
Cannot definitively distinguish good applications from bad ones (which leads to
overly permissive access control settings)


Network Security Prerequisites

Zero Trust Security Model and Implementation

This lesson describes the Zero Trust security model design principles, the
principle of least privilege, and steps to configure and implement a Zero Trust
segmentation platform.

Zero Trust Security Model


The Zero Trust security model addresses some of the limitations of perimeter-based
network security strategies by removing the assumption of trust from the equation.

With a Zero Trust model, essential security capabilities are deployed in a way that
provides policy enforcement and protection for all users, devices, applications,
and data resources, as well as the communications traffic between them, regardless
of location.

No Default Trust
With Zero Trust there is no default trust for any entity – including users,
devices, applications, and packets – regardless of what it is and its location on
or relative to the enterprise network.

Monitor and Inspect


The need to "always verify" requires ongoing monitoring and inspection of
associated communication traffic for subversive activities (such as threats).
Compartmentalize
Zero Trust models establish trust boundaries that effectively compartmentalize the
various segments of the internal computing environment. The general idea is to move
security functionality closer to the pockets of resources that require protection.
In this way, security can always be enforced regardless of the point of origin of
associated communications traffic.

Benefits of the Zero Trust Model


In a Zero Trust model, verification that authorized entities are always doing only
what they’re allowed to do is not optional: It's mandatory. Click the tabs for more
information about the benefits of implementing a Zero Trust network.

Improved Effectiveness
Greater Efficiency
Improved Ability
Lower Total Cost of Ownership
Clearly improved effectiveness in mitigating data loss with visibility and safe
enablement of applications, plus detection and prevention of cyberthreats

Zero Trust Design Principles


The principle of least privilege in network security requires that only the
permission or access rights necessary to perform an authorized task are granted.

Core Zero Trust Principles


Security profiles are defined based on an initial security audit performed
according to Zero Trust inspection policies. Discovery is performed to determine
which privileges are essential for a device or user to perform a specific function.

Ensure Resource Access
Ensure that all resources are accessed securely, regardless of location. This
principle suggests the need for multiple trust boundaries and increased use of
secure access for communication to or from resources, even when sessions are
confined to the “internal” network. It also means ensuring that the only devices
allowed access to the network have the correct status and settings, have an
approved VPN client and proper passcodes, and are not running malware.

Enforce Access Control
Adopt a least privilege strategy and strictly enforce access control. The goal is
to minimize allowed access to resources to reduce the pathways available for
malware and attackers to gain unauthorized access.

Inspect and Log All Traffic
This principle reiterates the need to “always verify” while also reinforcing that
adequate protection requires more than just strict enforcement of access control.
Close and continuous attention must also be given to exactly what “allowed”
applications are actually doing, and the only way to accomplish these goals is to
inspect the content for threats.

Zero Trust Architecture


The Zero Trust model identifies a protect surface made up of the network’s most
critical and valuable data, assets, applications, and services (DAAS). Protect
surfaces are unique to each organization. Because the protect surface contains only
what’s most critical to an organization’s operations, the protect surface is orders
of magnitude smaller than the attack surface–and always knowable.

Identify the Traffic


With an understanding of the interdependencies among an organization's DAAS,
infrastructure, services, and users, the security team should put controls in place
as close to the protect surface as possible, creating a micro-perimeter around it.
This micro-perimeter moves with the protect surface, wherever it goes.

Zero Trust Segmentation Platform


The Zero Trust segmentation platform (also called a network segmentation gateway by
Forrester Research) is the component used to define internal trust boundaries. That
is, the platform provides the majority of the security functionality needed to
deliver on the Zero Trust operational objectives. Click the tabs for more
information about the abilities of the segmentation platform.

Secure

Control

Monitor

Conceptual Architecture
With the protect surface identified, security teams can identify how traffic moves
across the organization in relation to the protect surface. Understanding who the
users are, which applications they are using, and how they are connecting is the
only way to determine and enforce policy that ensures secure access to data. Click
the arrows for more information about the main components of a Zero Trust
conceptual architecture.

Fundamental Assertions

There are fundamental assertions about Zero Trust:

The network is always assumed to be hostile.


External and internal threats exist on the network at all times.
Network locality is not sufficient for deciding trust in a network.
Every device, user, and network flow is authenticated and authorized.
Policies must be dynamic and calculated from as many sources of data as possible.
Single Component

In practice, the Zero Trust segmentation platform is not a single component in a single physical location. Because of performance, scalability, and physical limitations, an effective implementation is more likely to entail multiple instances distributed throughout an organization’s network. The solution is also called a “platform” to reflect that it is made up of multiple distinct (and potentially distributed) security technologies that operate as part of a holistic threat protection framework to reduce the attack surface and correlate information about discovered threats.

Management Infrastructure
Centralized management capabilities are crucial to enabling efficient
administration and ongoing monitoring, particularly for implementations involving
multiple distributed Zero Trust segmentation platforms. A data acquisition network
also provides a convenient way to supplement the native monitoring and analysis
capabilities for a Zero Trust segmentation platform. Session logs that have been
forwarded to a data acquisition network can then be processed by out-of-band
analysis tools and technologies intended, for example, to enhance network
visibility, detect unknown threats, or support compliance reporting.
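As an illustration of the kind of out-of-band analysis a data acquisition network enables, the sketch below scans forwarded session log records and flags sessions whose application is not on an allowed list. The record format, field names, and allowed-application set are hypothetical, not a specific product's log schema.

```python
# Hypothetical session records forwarded from Zero Trust segmentation platforms.
sessions = [
    {"src": "10.1.1.5", "dst": "10.2.0.8", "app": "ms-sql", "zone": "finance"},
    {"src": "10.1.1.9", "dst": "203.0.113.7", "app": "unknown-tcp", "zone": "finance"},
]

ALLOWED_APPS = {"ms-sql", "ssl", "web-browsing"}  # assumed policy for this zone

def flag_suspicious(records):
    """Return sessions whose application is not explicitly allowed."""
    return [r for r in records if r["app"] not in ALLOWED_APPS]

for record in flag_suspicious(sessions):
    print("review:", record)  # candidates for deeper out-of-band analysis
```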


Zero Trust Conceptual Architecture
Traditional security models identify the areas where breaches and exploits may occur (the attack surface) and attempt to secure that entire surface. Unfortunately, it is often difficult to identify the entire attack surface. Unauthorized applications, devices, and misconfigured infrastructure can expand the attack surface without your knowledge.

With the protect surface identified, you can identify how traffic moves across the
organization in relation to the protect surface. Understanding who the users are,
which applications they are using, and how they are connecting is the only way to
determine and enforce policy that ensures secure access to your data. With an
understanding of the interdependencies between the DAAS, infrastructure, services,
and users, you should put controls in place as close to the protect surface as
possible, creating a micro-perimeter around it. This micro-perimeter moves with the
protect surface, wherever it goes.

In the Zero Trust model, only known and permitted traffic is granted access to the
protect surface. A segmentation gateway, typically a next-generation firewall,
controls this access. The segmentation gateway provides visibility into the traffic
and users attempting to access the protect surface, enforces access control, and
provides additional layers of inspection. Zero Trust policies provide granular
control of the protect surface, making sure that users have access to the data and
applications they need to perform their tasks but nothing more. This is known as
least privilege access.

Zero Trust Least Privilege Access Model

Additionally, to implement a Zero Trust least privilege access model in the network, the firewall must meet the requirements listed below. Click the tabs for more information about the Zero Trust least privilege access model, and see the policy sketch that follows the list.

Have Visibility of and Control Over the Applications and Their Functionality in the Traffic
Be Able to Allow Specific Applications and Block Everything Else
Dynamically Define Access to Sensitive Applications and Data Based on a User's Group Membership
Dynamically Define Access from Devices or Device Groups to Sensitive Applications and Data, and from Users and User Groups to Specific Devices
Be Able to Validate a User's Identity Through Authentication
Dynamically Define the Resources That Are Associated with the Sensitive Data or Application
Control Data by File Type and Content
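The sketch below expresses that requirement set as a hypothetical allow-list policy evaluated in Python: each rule names an application, the user group permitted to use it, and the device group allowed to reach it, and anything that does not match an explicit rule is denied. The rule fields and values are illustrative, not a vendor configuration syntax.

```python
# Hypothetical least privilege rules: allow listed combinations, deny the rest.
RULES = [
    {"app": "sharepoint", "user_group": "finance", "device_group": "managed-laptops"},
    {"app": "ssh", "user_group": "it-admins", "device_group": "jump-hosts"},
]

def is_allowed(app: str, user_group: str, device_group: str) -> bool:
    """Permit a session only if an explicit rule matches it exactly."""
    for rule in RULES:
        if (rule["app"] == app
                and rule["user_group"] == user_group
                and rule["device_group"] == device_group):
            return True
    return False  # default deny: block everything not explicitly allowed

print(is_allowed("sharepoint", "finance", "managed-laptops"))      # True
print(is_allowed("sharepoint", "engineering", "managed-laptops"))  # False
```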

Zero Trust Segmentation Platform

Trust Zones

Zero Trust Capabilities


The core of any Zero Trust network security architecture is the Zero Trust Segmentation Platform, so you must choose the correct solution. Key criteria and capabilities to consider when selecting a Zero Trust Segmentation Platform include the following.

Criteria and Capabilities


Click the arrows for more information about the key criteria and capabilities to
consider when selecting a Zero Trust segmentation platform.

Secure Access

Consistent secure IPsec and SSL VPN connectivity is provided for all employees,
partners, customers, and guests wherever they’re located (for example, at remote or
branch offices, on the local network, or over the internet). Policies to determine
which users and devices can access sensitive applications and data can be defined
based on application, user, content, device, device state, and other criteria.

Inspection of All Traffic

Application identification accurately identifies and classifies all traffic, regardless of ports, protocols, and evasive tactics such as port hopping or encryption. Application identification eliminates methods that malware may use to hide from detection and provides complete context into applications, associated content, and threats.

Least Privileges Access Control

The combination of application, user, and content identification delivers a positive control model that allows organizations to control interactions with resources based on an extensive range of business-relevant attributes, including the specific application and individual functions being used, user and group identity, and the specific types or pieces of data being accessed (such as credit card or Social Security numbers). The result is truly granular access control that safely enables the correct applications for the correct sets of users while automatically preventing unwanted, unauthorized, and potentially harmful traffic from gaining access to the network.

Cyberthreat Protection

A combination of anti-malware, intrusion prevention, and cyberthreat prevention technologies provides comprehensive protection against both known and unknown threats, including threats on mobile devices. Support for a closed-loop, highly integrated defense also ensures that inline enforcement devices and other components in the threat protection framework are automatically updated.

Coverage for All Security Domains

Virtual and hardware appliances establish consistent and cost-effective trust boundaries throughout an organization’s network, including in remote or branch offices, for mobile users, at the internet perimeter, in the cloud, at ingress points throughout the data center, and for individual areas wherever they might exist.

Zero Trust Implementation


Implementation of a Zero Trust network security model doesn’t require a major
overhaul of an organization’s network and security infrastructure.

A Zero Trust design architecture can be implemented with only incremental modifications to the existing network, and implementation can be completely transparent to users. Advantages of such a flexible, non-disruptive deployment approach include minimizing the potential impact on operations and being able to spread the required investment and work effort over time.

Configure Listen-Only Mode
Define Zero Trust Zones
Establish Zero Trust Zones
Implement at Major Access Points

Cybersecurity Fundamentals
Cybercrime and Security Threats

This lesson describes the evolution of cybercrime and security threats, the impact
of security breaches on organizations, and the role of employees in exposing
critical data.

Understanding Cybercrime and Security Threats


Cybercrime and security threats continue to evolve, challenging organizations to
keep up as network boundaries and attack surfaces expand. Security breaches and
intellectual property loss can have a huge impact on organizations.

Security Approaches and Challenges


Current approaches to security, which focus mainly on detection and remediation, do
not adequately address the growing volume and sophistication of attacks. Click the
tabs for more information about the main challenges with the current security
approach.

Automation and Big Data Analytics


Decentralization of IT Infrastructure
Traditional Security Products
Complex Network
Cybercriminals leverage automation and big data analytics to execute massively
scalable and increasingly effective attacks against their targets. They often share
data and techniques with other threat actors to keep their approach ahead of point
security products. Cybercriminals are not the only threat: Employees may often
unknowingly violate corporate compliance and expose critical data in locations such
as the public cloud.

Prevention Architecture
The product portfolio's prevention architecture allows organizations to reduce
threat exposure by first enabling applications for all users or devices in any
location and then preventing threats within application flows, tying application
use to user identities across physical, cloud-based, and software-as-a-service
(SaaS) environments.

Provide Full Visibility
Reduce the Attack Surface
Prevent All Known Threats, Fast
Detect and Prevent New, Unknown Threats with Automation

Cybersecurity Fundamentals

Implementing a Prevention Architecture

This lesson describes Palo Alto Networks prevention-first architecture, which focuses on preventing attacks through continuous innovation in artificial intelligence, analytics, automation, and orchestration. It also describes three essential areas of cybersecurity strategy: Secure the Enterprise with Strata, Secure the Cloud with Prisma, and Secure the Future with Cortex.

Prevention-First Architecture
Palo Alto Networks is helping to address the world’s greatest security challenges
with continuous innovation that seizes the latest breakthroughs in artificial
intelligence, analytics, automation, and orchestration. By delivering an integrated
platform and empowering a growing ecosystem of partners, Palo Alto Networks is at
the forefront of protecting tens of thousands of organizations across clouds,
networks, and mobile devices.

The Palo Alto Networks portfolio of security technologies and solutions addresses
three essential areas of cybersecurity strategy.

Secure the Enterprise with Strata


Prevent attacks with the industry-leading network security suite, which enables
organizations to embrace network transformation while consistently securing users,
applications, and data, no matter where they reside.

PAN-OS®
PAN-OS® software runs Palo Alto Networks® next-generation firewalls. PAN-OS
natively uses key technologies (App-ID, Content-ID, Device-ID, and User-ID) to
provide complete visibility and control of applications in use across all users,
devices, and locations all the time. Inline ML and application and threat
signatures automatically reprogram the firewall with the latest intelligence so
allowed traffic is free of known and unknown threats.

Panorama
Panorama network security management enables centralized control, log collection,
and policy workflow automation across all next-generation firewalls (scalable to
tens of thousands of firewalls) from a single pane of glass.

Cloud-Based Subscription Services


Cloud-based subscription services, including DNS Security, URL Filtering, Threat
Prevention, and WildFire® malware prevention, deliver real-time advanced predictive
analytics, AI and machine learning, exploit/malware/C2 threat protection, and
global threat intelligence to the Palo Alto Networks Security Operating Platform.

Secure the Cloud with Prisma


The Prisma suite secures public cloud environments, SaaS applications, internet
access, mobile users, and remote locations through a cloud-delivered architecture.
It is a comprehensive suite of security services to effectively predict, prevent,
detect, and automatically respond to security and compliance risks without creating
friction for users, developers, and security and network administrators. Click the
arrows for more information about the core components of Prisma.

Prisma Cloud

Prisma Cloud is the industry’s most comprehensive threat protection, governance, and compliance offering. It dynamically discovers cloud resources and sensitive data across AWS, GCP, and Azure to detect risky configurations and identify network threats, suspicious user behavior, malware, data leakage, and host vulnerabilities. It eliminates blind spots across cloud environments and provides continuous protection with a combination of rule-based security policies and class-leading machine learning.

Prisma Access

Prisma Access is a Secure Access Service Edge (SASE) platform that helps
organizations deliver consistent security to their remote networks and mobile
users. It’s a generational step forward in cloud security, using a cloud-delivered
architecture to connect all users to all applications. All of an organization's
users, whether at headquarters, in branch offices, or on the road, connect to
Prisma Access to safely use cloud and data center applications, as well as the
internet. Prisma Access consistently inspects all traffic across all ports and
provides bidirectional software-defined wide-area networking (SD-WAN) to enable
branch-to-branch and branch-to-headquarters traffic.

Prisma SaaS

Prisma SaaS functions as a multimode cloud access security broker (CASB), offering
inline and API-based protection working together to minimize the range of cloud
risks that can lead to breaches. With a fully cloud-delivered approach to CASB,
organizations can secure their SaaS applications through the use of inline
protections to safeguard inline traffic with deep application visibility,
segmentation, secure access, and threat prevention, as well as API-based
protections to connect directly to SaaS applications for data classification, data
loss prevention, and threat detection.

Secure the Future with Cortex


Cortex is designed to simplify security operations and considerably improve
outcomes. Cortex is enabled by the Cortex Data Lake, where customers can securely
and privately store and analyze large amounts of data that is normalized for
advanced AI and machine learning to find threats and orchestrate responses quickly.

Click the tabs for more information about the core components of Cortex.

Cortex XDR
Cortex XSOAR
Cortex Data Lake
AutoFocus
Cortex XDR breaks the silos of traditional detection and response by natively
integrating network, endpoint, and cloud data to stop sophisticated attacks. Taking
advantage of machine learning and AI models across all data sources, it identifies
unknown and highly evasive threats from managed and unmanaged devices.

Course Summary
Please ensure all knowledge check questions have been answered in order for this
course to be marked complete.

Now that you have completed this course, you should be able to:

Describe cybercrime and its impact on organizations

Identify capabilities of the Palo Alto Networks prevention-first architecture


Cloud Security Fundamentals

Virtualization, Containers, and Micro-VMs

This lesson describes how cloud native technologies are based on the concepts of
virtualization. It also describes containers and micro-VMs.

Cloud Native Technology Properties


A useful way to think of cloud native technologies is as a continuum spanning from
virtual machines (VMs) to containers to serverless.

On one end are traditional VMs operated as stateful entities, as we’ve done for
over a decade now. On the other are completely stateless, serverless apps that are
effectively just bundles of app code without any packaged accompanying operating
system (OS) dependencies.

Cloud Native Technology Properties


The Cloud Native Computing Foundation’s (CNCF) charter defines three properties of
cloud native technologies.
Container Packaged
Dynamically Managed
Microserviced
Important Terminology
Let's review some important terminology that will be used in this lesson.

Click each tab to read the important terminology in this lesson.

Hypervisor

Native

Hosted

Virtualization
Virtualization is the foundation of cloud computing. You can use virtualization to
create multiple virtual machines to run on one physical host computer.

You can think of virtual machines as separate computers running various operating
systems on a physical host computer. Virtual machines and their associated
operating systems often are referred to as “virtual guest operating systems.” These
virtual guest operating systems all share the physical compute resources:
processors, dynamic memory (RAM), and permanent storage media of a physical host
machine.

Hypervisor
Hypervisor software allows multiple virtual guest operating systems to run concurrently on a single physical host computer. The hypervisor functions as a layer between the guest operating systems and the physical hardware, either running directly on the hardware or on top of a host operating system.

Click the images for more information about the two types of hypervisors.

Security Considerations
Virtualization is an important technology used in data centers and cloud computing
to optimize resources. Click each tab for more information about important security
considerations associated with virtualization.

Dormant VMs
Hypervisor Vulnerabilities
Intra-VM Communications
VM Sprawl
In many data center and cloud environments, inactive VMs are routinely (often
automatically) shut down when they are not in use. VMs that are shut down for
extended periods of time (weeks or months) may be inadvertently missed when anti-
malware updates and security patches are applied.

Containers
A container is a package of software that allows applications to run independently
within a host operating system.

Video: Introduction to Containers


Watch the video for more information about the purpose of containers, container
architecture and engine availability, and the cybersecurity issues that come with
deploying containers.


Container Orchestration
Kubernetes is an open-source orchestration platform that provides an application
programming interface (API) that enables developers to define container
infrastructure in a declarative fashion, that is, infrastructure as code (IaC).
Click the tabs for more information about application development using containers
and microservices.

Kubernetes

Microservices
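As a concrete illustration of the declarative, infrastructure-as-code style, the sketch below builds a minimal Kubernetes Deployment object as a Python dictionary that mirrors the YAML manifest a developer would submit to the cluster API: the developer declares the desired state (three replicas of one container image), and the orchestrator works to make reality match it. The name, image, and replica count are placeholders.

```python
import json

# A minimal Kubernetes Deployment expressed as data (infrastructure as code).
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # desired number of identical container instances
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example/web:1.0",
                     "ports": [{"containerPort": 8080}]}
                ]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))  # serialized form a client could submit
```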

Containers as a Service
As containers grew in popularity and use diversified, orchestrators such as
Kubernetes (and its derivatives such as OpenShift), Mesos, and Docker Swarm became
increasingly important to deploy and operate containers at scale. Containers-as-a-
service (CaaS) platforms manage the underlying compute, storage, and network
hardware by default and, although assembled from many more generic components, are
highly optimized for container workloads. Click the tabs for more information about
why orchestrators such as Kubernetes (and its derivatives such as OpenShift),
Mesos, and Docker Swarm are difficult to operate at scale.

Complex to Set Up and Maintain


Difficult to Manage
Although these orchestrators abstract much of the complexity required to deploy and
operate large numbers of microservices comprising many containers and running
across many hosts, they can be complex to set up and maintain.

Hypervisors Versus Docker Containers


There are significant differences between hypervisors and containers. In brief,
hypervisors abstract hardware and allow you to run operating systems. Containers
abstract the operating system to enable you to run applications.

Hypervisor
In the virtualized deployment, there is hardware, an operating system, a hypervisor
that abstracts each virtual machine from the base OS, and (guest) virtual machines
that have full operating systems installed in them with their respective libraries
and applications.

Docker Container
Containers allow Dev teams to package apps and services in a standard and simple way. Containers can run anywhere and be moved easily. Docker containers are the most common. Docker is a tool used by developers to package dependencies together into a single container (or image). This means that, to run an application, you are not required to "pip install" all of the required packages on the host; they are part of a container that "docks" to the server and contains all the libraries you need. Pip and conda are the two most popular ways to install and manage Python packages in containers.

Micro-VMs
Micro-VMs are scaled-down, lightweight virtual machines that run on hypervisor
software. Micro-VMs contain only the Linux operating system kernel features
necessary to run a container.

Click the down arrows for more information about the importance of micro-VMs and
what they provide.

Why Micro-VMs?

For some organizations, especially large enterprises, containers provide an attractive app deployment and operational approach but lack sufficient isolation to mix workloads of varying sensitivity levels. Even setting aside recently discovered hardware flaws such as Meltdown and Spectre, VMs provide a much stronger degree of isolation, but at the cost of increased complexity and management burden. Micro-VMs such as Kata Containers, VMware vSphere Integrated Containers, and Amazon Firecracker seek to combine the strengths of both approaches by providing a developer-friendly API and abstraction of the app from the OS while hiding the underlying complexities of compatibility and security isolation within the hypervisor.

Characteristics Of the Various Cloud Providers


AWS, Azure, and GCP have many different services that fit into the categories of
compute, storage, database, and networking.

AWS Basic Cloud Infrastructure


The image below depicts popular AWS services that you should be familiar with.

Azure Basic Cloud Infrastructure


The illustration depicts popular Azure services that you should be familiar with.

GCP Basic Cloud Infrastructure


The illustration depicts popular GCP services that you should be familiar with.

Cloud Security Fundamentals

Serverless Technology

This lesson describes serverless computing and why it is a growing segment of cloud
computing.

Important Terminology
Let's review some important terminology that will be used in this lesson.

Click each tab to read the important terminology in this lesson.

Hypervisor

Native

Hosted

Serverless Computing and Function as a Service


Serverless architectures, also referred to as function as a service (FaaS), enable
organizations to build and deploy software and services without maintaining or
provisioning any physical or virtual servers. Applications made using serverless
architectures are suitable for a wide range of services and can scale elastically
as cloud workloads grow.

Benefits of Using Serverless Computing and FaaS


Here are some of the major benefits of using serverless computing and FaaS.

Focus on Core Product Functionality


From a software development perspective, organizations adopting serverless
architectures can focus on core product functionality and completely disregard the
underlying operating system, application server, or software runtime environment.

Not Responsible for Security Patches


By developing applications using serverless architectures, users relieve themselves
of the daunting task of continually applying security patches for the underlying
operating system and application servers. Instead, these tasks are now the
responsibility of the serverless architecture provider.

Secure Data Center, Network, and Servers


In serverless architectures, the serverless provider is responsible for securing
the data center, network, servers, operating systems, and their configurations.
However, application logic, code, data, and application-layer configurations still
need to be robust and resilient to attacks. These are the responsibility of
application owners.

The image below shows each responsibility of the application owner and the FaaS
provider.

Adopting a Serverless Model


Click the cards for more information about how adopting a serverless model can
impact application development.
Serverless App Package and Environment
While on-demand containers greatly reduce the “surface area” exposed to end users
and, thus, the complexity associated with managing them, some users prefer an even
simpler way to deploy their apps. Serverless is a class of technologies designed to
allow developers to provide only their app code to a service, which then
instantiates the rest of the stack below it automatically.

App Package
In serverless apps, the developer only uploads the app package itself, without a
full container image or any OS components. The platform dynamically packages it
into an image, runs the image in a container, and (if needed) instantiates the
underlying host OS and VM as well as the hardware required to run them. In a
serverless model, users make the most dramatic trade-offs of compatibility and
control for the simplest, most efficient deployment and management experience.
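For example, a serverless function is often nothing more than a single handler that the platform invokes with an event; there is no server, OS, or container image for the developer to manage. The minimal sketch below follows the AWS Lambda Python handler convention; the event fields are placeholders.

```python
import json

def lambda_handler(event, context):
    """Entry point the FaaS platform calls with the trigger payload."""
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local illustration (in production the platform supplies event and context):
print(lambda_handler({"name": "serverless"}, None))
```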

Serverless Environment
Examples of serverless environments include AWS Lambda and Azure Functions.
Arguably, many platform-as-a-service (PaaS) offerings, such as Pivotal Cloud
Foundry, are also effectively serverless even if they have not historically been
marketed as such. While on the surface, serverless may appear to lack the
container-specific, cloud-native attribute, containers are extensively used in the
underlying implementations, even if those implementations are not exposed to end
users directly.

Issues with Serverless Architecture


Serverless architectures introduce a new set of issues that must be considered when
securing such applications, including:

Increased Attack Surface

Serverless functions consume data from a wide range of event sources, such as
HyperText Transfer Protocol (HTTP) application program interfaces (APIs), message
queues, cloud storage, Internet of Things (IoT) device communications, and so
forth. This diversity increases the potential attack surface dramatically,
especially when messages use protocols and complex message structures. Many of
these messages cannot be inspected by standard application-layer protections, such
as web application firewalls (WAFs).

Attack Surface Complexity

The attack surface in serverless architectures can be difficult for some to understand, given that such architectures are still somewhat new. Many software developers and architects have yet to gain enough experience with the security risks and appropriate security protections required to secure such applications.
Overall System Complexity

Visualizing and monitoring serverless architectures is still more complicated than in standard software environments.

Inadequate Security Testing

Performing security testing for serverless architectures is more complex than testing standard applications, especially when such applications interact with remote third-party services or with backend cloud services, such as Non-Structured Query Language (NoSQL) databases, cloud storage, or stream processing services. Additionally, automated scanning tools are currently not adapted to examining serverless applications.

Traditional Security Protections (Firewall, Web Application Firewall (WAF), Intrusion Prevention System (IPS)/Intrusion Detection System (IDS))

Since organizations that use serverless architectures do not have access to the
physical (or virtual) server or its operating system, they cannot deploy
traditional security layers, such as endpoint protection, host-based intrusion
prevention, WAFs, and so forth. Additionally, existing detection logic and rules
have yet to be “translated” to support serverless environments.

Common Scanning Tools


Common scanning tools currently include the following:

Dynamic Application Security Testing (DAST)


DAST tools will only provide testing coverage for HTTP interfaces. This limited
capability poses a problem when testing serverless applications that consume input
from non-HTTP sources or interact with backend cloud services.
Many DAST tools inadequately test web services—for example, RESTful APIs that don’t
follow the classic HTML/HTTP request/response model and request format.
Static Application Security Testing (SAST)
SAST tools rely on data-flow analysis, control-flow analysis, and semantic analysis to detect vulnerabilities in software. Because serverless applications contain multiple distinct functions that are stitched together using event triggers and cloud services (for example, message queues, cloud storage, or NoSQL databases), statically analyzing data flow in such scenarios is highly prone to false positives. SAST tools will suffer from false negatives as well, because source/sink rules in many tools do not consider FaaS constructs. These rulesets will need to evolve to provide proper support for serverless applications.
Interactive Application Security Testing (IAST)
IAST tools have better odds at accurately detecting vulnerabilities in serverless
applications when compared to both DAST and SAST.
Similar to DAST tools, their security coverage is impaired when serverless
applications use non-HTTP interfaces to consume input.
IAST solutions require that the tester deploy an instrumentation agent on the
local machine, which is not an option in serverless environments.
Course Summary
Now that you have completed this lesson, you should be able to:

Describe virtualization, containers, and micro-VMs

Describe the characteristics of cloud service providers

Describe serverless computing and its benefits


Cloud Security Fundamentals

Cloud Computing Models and Responsibilities

This lesson provides an overview of key cloud computing concepts including the
primary cloud models, traditional versus cloud solutions, the concept of shared
responsibility and the overall benefits of moving to cloud computing.

What is Cloud Computing?


Cloud computing is not a location but rather a pool of resources that can be
rapidly provisioned in an automated, on-demand manner. Read the quote below for the
definition of cloud computing according to the U.S. National Institute of Standards
and Technology.

The Benefits of Cloud Computing


There are multiple benefits of transitioning to cloud computing. One value is the
ability to pool resources to achieve economies of scale. This ability to pool
resources is true for private or public clouds. Another value of cloud computing is
the ability to be more agile. Instead of having many independent and often under-
used servers deployed for your enterprise applications, pools of resources are
aggregated, consolidated, and designed to be elastic enough to scale with the needs
of your organization.

Here are some other benefits:


Segmented Administration

Different organizations (or customers or business units) can control (and monitor)
a separate firewall instance so that they have control over their own traffic
without interfering with the traffic or policies of another firewall instance on
the same physical firewall.

Scalability

After the physical firewall is configured, adding or removing customers or business units can be done efficiently. An ISP, managed security service provider, or enterprise can provide different security services to each customer.

Reduced Capital and Operational Expenses

Virtual systems eliminate the need to have multiple physical firewalls at one location because virtual systems co-exist on one firewall. By not having to purchase multiple firewalls, an organization can save on the hardware expense, electric bills, and rack space, and can reduce maintenance and management expenses.

Ability to Share IP-Address-to-Username Mappings

By assigning a virtual system as a User-ID hub, you can share the IP-address-to-
username mappings across virtual systems to leverage the full User-ID capacity of
the firewall and reduce operational complexity.

Important Terminology
Let's review the important terminology that will be used in this lesson.

Click each tab to read the important terminology in this lesson.

Identity and Access Management (IAM)

Technical Debt

Distributed Workforce

Cloud Cybersecurity Infrastructure

On-Premises

Role-Based Access Control (RBAC)

Click each tab to read the key terminology in this lesson.


DevOps

Operating System (OS)

Virtual Machine

App Software

Runtime

Shift-Left

Cloud Computing Ecosystem


The cloud computing ecosystem consists of service models, deployment models,
responsibilities, and security challenges.

Service Models, Deployment Models, and Responsibilities


Virtualization is a critical component of a cloud computing architecture that, when
combined with software orchestration and management tools that are covered in this
course, allows you to integrate disparate processes so that they can be automated,
easily replicated, and offered on an as-needed basis.

Cloud Computing Service Models


As data center managers face a burgeoning population of mobile users, the
distributed workforce – with multiple endpoints and cloud applications – is forcing
organizations to evolve both their in-house and cloud cybersecurity
infrastructures.

Three Computing Service Models


NIST defines three distinct cloud computing service models.

Click each card for more information about each cloud computing service model.

Cloud Computing Deployment Models


Data and applications now reside in a multitude of cloud environments – including
private and public clouds – spanning infrastructure, platform, and security.

Four Cloud Deployment Models


NIST defines four cloud computing deployment models. Click the arrows for more
information about each cloud computing model.

Public Cloud

Public cloud is a cloud infrastructure that is open to use by the general public.
It’s owned, managed, and operated by a third party (or parties), and it exists on
the cloud provider’s premises. Examples of public CSPs are Amazon Web Services
(AWS), Google Cloud, and Microsoft Azure.

Community Cloud

Community cloud is a cloud infrastructure that is used exclusively by a specific group of organizations.

Private Cloud

Private cloud is a cloud infrastructure that is used exclusively by a single organization. It may be owned, managed, and operated by the organization or a third party (or a combination of both), and it may exist on-premises or off-premises.

Hybrid Cloud

Hybrid cloud is a cloud infrastructure that comprises two or more of these deployment models and is, therefore, the best of both worlds: private data center for static, older workloads and public cloud for newer apps, agility, and scalability.

Shared Responsibility Model


The security risks that threaten your network today do not change when you move
from on-premises to the cloud. The shared responsibility model defines who
(customer and/or provider) is responsible for what (related to security) in the
public cloud.

Security Responsibility Ownership By Model


In general terms, the cloud provider is responsible for security of the cloud,
including the physical security of the cloud data centers, and foundational
networking, storage, compute, and virtualization services. The cloud customer is
responsible for security in the cloud, which is further delineated by the cloud
service model. Click the arrows for more information about what the cloud customer
is responsible for.
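As a rough summary of that split, the sketch below maps each service model to the areas the customer typically still secures. The exact boundary varies by provider and contract, so treat the lists as illustrative rather than definitive.

```python
# Illustrative customer-side security responsibilities by cloud service model.
CUSTOMER_RESPONSIBILITY = {
    "IaaS": ["data", "identities and access", "applications",
             "guest OS and patches", "network controls"],
    "PaaS": ["data", "identities and access", "applications",
             "platform configuration"],
    "SaaS": ["data", "identities and access", "application configuration"],
}

def customer_scope(model: str):
    """Return what the cloud customer still secures under a given model."""
    return CUSTOMER_RESPONSIBILITY.get(model, [])

print(customer_scope("SaaS"))  # ['data', 'identities and access', 'application configuration']
```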

Multi-Tenancy Cloud Environments


In multi-tenancy cloud environments (where multiple customers of a cloud vendor use the same computing resources), particularly in SaaS models, the controls and resources available to the customer are limited by the cloud provider.

Network Security vs. Cloud Security


With the use of cloud computing technologies, your data center environment can
evolve from a fixed environment where applications run on dedicated servers toward
an environment that is dynamic and automated.

Dynamic Environment
In a dynamic environment, pools of computing resources are available to support
application workloads that can be accessed anywhere, anytime, from any device.
Security remains a significant challenge when you embrace this new dynamic, cloud-
computing fabric environment. Many of the principles that make cloud computing
attractive may go against network security best practices.

Network Security
Click the icons for more information about network security functionality and best
practices.

Cloud Security
Click the icons for more information about cloud security functionality and best
practices.

Securing the Cloud


As organizations transition from a traditional data center architecture to a public, private, or hybrid cloud environment, enterprise security strategies must be adapted to support changing requirements in the cloud.

Click the tabs for more information about the important requirements to secure the
cloud.

Consistent Security
Zero Trust Principles
Centralized Management
Shift-Left
Identity Management

The same levels of application control and threat prevention should be used to
protect both your cloud computing environment and your physical network. First, you
need to be able to confirm the identity of your applications, validating their
identity and forcing them to use only their standard ports. You also need to be
able to block the use of rogue applications while simultaneously looking for and
blocking misconfigured applications. Finally, application-specific threat
prevention policies should be applied to block both known and unknown malware from
moving into and across your network and cloud environment.

Cloud Security Best Practices


No matter which type of cloud service you use (IaaS, PaaS, SaaS), the
responsibility of securing certain types of workloads will always fall on you, not
on the cloud provider.

Click each tab to see how to maximize your cloud environment's security.

Review Default Settings

Adapt Data Storage and Authentication Configurations

Do Not Assume Your Cloud Data Is Safe


Integrate with Your Cloud's Data Retention Policy

Set Appropriate Privileges

Keep Cloud Software Updated

Build Security Policies and Best Practices into Cloud Images

Isolate Your Cloud Resources


Cloud Security Fundamentals

The Hybrid
Cloud

This lesson describes the hybrid cloud and how organizations are using it to
transition to public clouds from traditional networks.

The Hybrid Cloud


Many organizations are using public cloud compute resources to expand private cloud
capacity rather than expand compute capacity in an on-premises private cloud data
center.

Virtualization of Guest Operating Systems


The use of private cloud and public cloud compute resources to expand services is
called the hybrid cloud. The virtualization of guest operating systems has forced
organizations to adopt a hybrid cloud model over a traditional data center.

Click each image for more information about the traditional data center and the
hybrid cloud.
Important Terminology
Let's read the important terminology that will be used in this lesson.

Click each tab to read the important terminology in this lesson.

Bolted-On Feature Sets

Contiguous Ports

Bursty Demand Load

Form Factor

Active/Passive Mode

Traditional Data Center Versus Hybrid Cloud


The “ports first” traditional data center security solution limits the ability to see all traffic on all ports. The move toward a cloud computing model – private, public, or hybrid – improves operational efficiencies.

Traditional Data Center Weaknesses


Click the tabs for more information about the weaknesses associated with
traditional data centers.

Limited Visibility and Control

No Concept of Unknown Traffic

No Policy Reconciliation Tools

Cumbersome Security Policy Update Process

Hybrid Cloud Strengths


Click the tabs for more information about the strengths associated with hybrid
clouds.

Optimizes Resources

Reduces Costs

Increases Operational Flexibility

Maximizes Efficiency

Private Cloud Traffic Types and Compute Clusters


Organizations usually implement security to protect traffic flowing north-south,
but this approach is insufficient for protecting east-west traffic within a private
cloud. To improve their security posture, enterprises must protect against threats
across the entire network, both north-south and east-west.
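
As a rough illustration of why east-west enforcement matters, the sketch below models a hypothetical allow list of VM-to-VM flows inside a private cloud and flags anything not explicitly authorized. In practice this policy would be enforced by virtual firewalls, not application code.

    # Hypothetical east-west allow list: only these VM-to-VM flows are authorized.
    ALLOWED_FLOWS = {
        ("web-vm", "app-vm", 8443),  # web tier to application tier
        ("app-vm", "db-vm", 5432),   # application tier to database tier
    }

    def check_flow(src, dst, port):
        """Return a policy verdict for a single east-west flow."""
        if (src, dst, port) in ALLOWED_FLOWS:
            return "allow"
        return "deny and log"  # unauthorized lateral movement is blocked and logged

    print(check_flow("web-vm", "app-vm", 8443))  # allow
    print(check_flow("web-vm", "db-vm", 5432))   # deny and log (attempt to skip the app tier)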

Virtual Data Center Design


In a virtual data center (private cloud), there are two different types of traffic,
each of which is secured in a different manner: north-south and east-west.

The compute cluster is the building block for hosting the application
infrastructure and provides the necessary resources in terms of compute, storage,
networking, and security.

The graphic is a typical virtual data center design architecture.

Click the icons for more information about the types of traffic and compute
clusters.

Private Cloud Security


The use of virtual firewalls for east-west protection provides unprecedented
traffic and threat visibility.

East-West Protection Benefits


Virtual data center security best practices dictate a combination of north-south
and east-west protection. Click the tabs for more information about the benefits
that east-west protection provides.

Authorizes Allowed Apps


Reduces Lateral Threats
Stops Threats
Protects from Data Theft
Authorizes only allowed applications to flow inside the data center, between VMs.

Traffic and Threat Visibility


An added benefit of using virtual firewalls for east-west protection is the
unprecedented traffic and threat visibility that the virtualized security device
can now provide. After Traffic logs and Threat logs are turned on, VM-to-VM
communications and malicious attacks become visible. This virtual data-center
awareness allows security teams to optimize policies and enforce cyberthreat
protection (for example, IPS, anti-malware, file blocking, data filtering, and DoS
protection) where needed.

Hybrid Cloud Security Evolution


Security in a hybrid cloud evolves incrementally in four phases.

Phased Approach
The following approach to security in the evolving data center – from traditional
three-tier architectures to virtual data centers and to the cloud – aligns with
practical realities, such as the need to leverage existing best practices and
technology investments, and the likelihood that most organizations will transform
their data centers incrementally. This approach consists of four phases. Click each
icon for more information about each phase.

Course Summary
Now that you have completed this lesson, you should be able to:
Describe cloud computing models

Describe the benefits of cloud computing

Describe the shared responsibility model

Describe some cloud security best practices

Describe the hybrid cloud and how it differs from traditional data centers

Describe the four phases of hybrid cloud transition


Cloud Security Fundamentals

Application Development
Platforms and Processes

This lesson describes how cloud security is integrated into organizations and their
processes.

Cloud Native Security Platform (CNSP)


The cloud native approach takes the best of what cloud has to offer – scalability,
deployability, manageability, and limitless on-demand compute power – and applies
these principles to software development, combined with CI/CD automation, to
radically increase productivity, business agility, and cost savings.

Continuous Integration/Continuous Delivery (CI/CD)


Application development methodologies are moving away from the traditional
“waterfall” model toward more agile continuous integration/continuous delivery
(CI/CD) processes with end-to-end automation.

The benefits and challenges of the CI/CD process are:

Benefits

CI/CD is a new approach that offers a multitude of benefits, such as shorter time
to market and more efficient software delivery.

Challenges

Traditional security methodologies weren’t designed to address these modern application workflows. As development teams embrace cloud native technologies, security teams struggle to keep pace.

Limited prevention controls, poor visibility, and tools that lack automation yield
incomplete security analytics. These factors increase the risk of compromise and
the likelihood of successful breaches in cloud environments. Meanwhile, the demand
for an entirely new approach to security emerges.

Cloud Native Architectures


Cloud native architectures consist of cloud services such as containers, serverless
security, platform as a service (PaaS), and microservices.

These services are loosely coupled, which means they are not hardwired to any
infrastructure components, thus allowing developers to make changes frequently
without affecting other pieces of the application or other team members’ projects
across technology boundaries such as public, private, and multicloud deployments.

"Cloud native” refers to a methodology of software development that is essentially


designed for cloud delivery and exemplifies all the benefits of the cloud by
nature.

DevOps, SecOps, and DevSecOps


Cloud native security point products that began to appear in the market were
engineered to address one part of the problem or one segment of the software stack,
but on their own they could not collect enough information to accurately understand
or report on the risks across cloud native environments. This situation forced
security teams to use multiple tools and vendors, which increased cost, complexity,
and risk and also created blind spots where the tools overlapped but didn’t
integrate. The solution to this problem requires a unified platform approach that
can envelop the entire CI/CD lifecycle and integrate with the DevOps workflow.

DevOps, SecOps, and DevSecOps overlap in some areas but there are distinct
differences in the roles they play in the CI/CD process.

DevOps

DevOps teams are a collaboration between the development teams and IT operations.
Traditionally, IT operations did not understand the specific technical and process
requirements of the software development process. DevOps teams have a closer
relationship with software development teams in order to facilitate the release of
applications.
SecOps

SecOps teams are essentially IT operations teams with a focus on security. Historically, IT operations and security were separate teams. SecOps teams directly integrate security into IT operations.

DevSecOps

DevSecOps teams have a more specific focus on ensuring security than DevOps and
SecOps teams. They focus on applying application and infrastructure security
automation and processes across the CI/CD pipeline.

CNSP Functionality
CNSPs share context about infrastructure, PaaS, users, development platforms, data,
and application workloads across platform components to enhance security.

Click the cards for more information about the functions of CNSPs.

Compute Options
Just as cloud-native approaches have fundamentally changed how the cloud is used,
CNSPs are fundamentally restructuring how the cloud is secured.

Click the tabs for more information about how organizations traditionally embrace
compute options and which kind of coverage CNSPs can provide organizations today.

Traditional Compute Option Coverage

Modern Compute Option Coverage

Security and Cloud Application Development


Application development in the cloud often follows the CI/CD process and, in
optimal situations, security operations is integrated into this workflow. A CI/CD
pipeline, sometimes also called a DevOps pipeline, is the workflow when the
processes that go into delivering software are integrated. When code flows smoothly
and automatically from one CI/CD process into another, you have a CI/CD pipeline.

The pipeline is sometimes represented as a loop because teams can use feedback from
the other stages to plan their next set of code changes. This practice helps
achieve the DevOps goal of continuous improvement.
Note the iterative process by which development teams on the left loop create and
package the software and operations teams on the right loop in order to release and
then monitor it.

DevOps Software Development Model


The DevOps software development model is enhancing or replacing the traditional
software development life cycle (SDLC) model.

In the traditional software development model, developers write large amounts of code for new features, products, bug fixes, and such, and then pass their work to the operations team for deployment, usually via an automated ticketing system. The operations team receives this request in its queue, tests the code, and gets it ready for production – a process that can take days, weeks, or even months.

Important Characteristics of DevOps


DevOps unites the development and operations teams throughout the entire software
delivery process, enabling them to discover and remediate issues earlier, automate
testing and deployment, and reduce time to market. The important characteristics
that comprise DevOps are:

Collaborative Teams
Two separate teams (development and operations) operate in a communicative and
collaborative way.

Culture
DevOps refers to a culture where developers, testers, and operations personnel
cooperate throughout the entire software delivery lifecycle.

Strategy
Although there are tools that work well with a DevOps model or help promote DevOps
culture, DevOps is ultimately a strategy, not a tool.

More Than Automation


Although automation is very important for DevOps culture, automation alone does not
define DevOps.

Prioritizing Software Security in the Cloud


The customer is ultimately responsible for providing security for the data, hosts,
containers, and serverless instances in the cloud.

Public cloud service providers have done a great job with the build, maintenance,
and updating of computing hardware, virtual machines, data storage, and databases
along with the minimum baseline security protection mechanisms. However, the
customer is ultimately responsible for providing security for the data, hosts,
containers, and serverless instances in the cloud. Customers should follow three
DevOps models and processes to better secure their data in the cloud.

DevOps CI/CD Pipeline


DevOps is a cycle of continuous integration and continuous delivery (or continuous
deployment), otherwise known as the CI/CD pipeline.

The CI/CD pipeline integrates development and operations teams to improve productivity by automating infrastructure and workflows and by continuously measuring application performance. Click the tabs for more information about the important characteristics of DevOps continuous integration, delivery, and deployment.
Continuous Integration

Continuous Delivery

Continuous Deployment

Use Case Scenario


Click the arrows to see a use case scenario where code is pushed through a
repository and runs through the pipeline.

1. Code Is Pushed

Developers push code to a repository such as GitHub, which securely stores the code they create. GitHub is used by more than 7,000 companies, including Airbnb, Netflix, and Shopify.

2. Code Is Run Through the Pipeline

Automated tools such as open source Jenkins detect the changes, pull the code from
the repository, and run the pipeline. By running the pipeline, Jenkins can
automatically build, test, and deploy code changes to the production environment.
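
The pipeline logic itself reduces to a sequence of gated stages. The sketch below is a deliberately simplified, tool-agnostic Python illustration of that flow; the stage functions are placeholders, not a Jenkins or GitHub API, and a real pipeline would be defined in the CI tool's own configuration format.

    # Tool-agnostic sketch of a CI/CD pipeline: each stage must pass before the next runs.
    def build(commit):
        print("building", commit)
        return True  # placeholder for compiling and packaging the application

    def test(commit):
        print("running automated tests for", commit)
        return True  # placeholder for unit and integration test results

    def deploy(commit):
        print("deploying", commit, "to production")
        return True  # placeholder for the deployment step

    def run_pipeline(commit):
        for stage in (build, test, deploy):
            if not stage(commit):
                print("pipeline stopped at stage:", stage.__name__)
                return False
        print("pipeline complete")
        return True

    run_pipeline("abc1234")  # hypothetical commit identifier pushed to the repository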

DevSecOps Software Development Model


One problem in DevOps is that security often ends up falling through the cracks
because developers move quickly and their workflows are automated. To mitigate this
problem, security should be shifted into code development before code deployment.

In some organizations, a separate team is responsible for security, and developers don't want to slow down for checks and requests. As a result, many developers deploy their software without going through the proper security channels and inevitably make harmful security mistakes. To solve this problem, organizations are adopting DevSecOps.

DevSecOps takes the concept behind DevOps – the idea that developers and IT teams should work together closely, instead of separately, throughout software delivery – and extends it to include security, integrating automated checks into the full CI/CD pipeline. This counters the perception that security is an outside force and allows developers to maintain their speed without compromising data security.
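
Continuing the earlier pipeline sketch, the fragment below shows what an automated security gate between test and deploy might look like; the vulnerable-package list is hypothetical and stands in for a real dependency or image scanner.

    # Hypothetical security gate: block deployment if a known-vulnerable dependency is found.
    KNOWN_VULNERABLE = {("examplelib", "1.0.3")}  # stand-in for a real vulnerability feed

    def security_scan(dependencies):
        """Return True only if no dependency matches the vulnerability list."""
        findings = [d for d in dependencies if d in KNOWN_VULNERABLE]
        for name, version in findings:
            print("BLOCKED:", name, version, "has a known vulnerability")
        return not findings

    deps = [("examplelib", "1.0.3"), ("otherlib", "2.4.0")]
    if security_scan(deps):
        print("security gate passed - safe to deploy")
    else:
        print("security gate failed - fix the finding before deploying")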


Cloud Security Fundamentals

Security Operations
Responsibilities

Cloud security teams plan, implement, analyze, and remediate security risks. This
lesson describes some of the areas where security teams focus on improving
security.

What is Identity and Access Management (IAM) Security?


IAM security is a powerful cloud security control for reducing entitlement risk. It is part of the larger Cloud Infrastructure Entitlement Management (CIEM) area and focuses on detecting gaps between the privileges that are required and the privileges that are unneeded. It also provides visibility into entitlements that could lead to security risks and enforces remediation to achieve least-privilege access.
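
As a minimal sketch of that entitlement-gap idea (hypothetical permission names and usage data, not a CIEM product's API), the snippet below compares what an identity is granted with what it has actually used and flags the difference as candidates for removal.

    # Hypothetical CIEM-style check: granted permissions vs. permissions actually used.
    granted = {
        "build-service-account": {"storage.read", "storage.write", "compute.deleteInstance"},
    }
    used_last_90_days = {
        "build-service-account": {"storage.read", "storage.write"},
    }

    def unused_entitlements(identity):
        """Permissions granted but never exercised - candidates for least-privilege cleanup."""
        return granted.get(identity, set()) - used_last_90_days.get(identity, set())

    print(unused_entitlements("build-service-account"))  # {'compute.deleteInstance'}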

IAM Misconfiguration Challenges


An attacker who exploits IAM misconfigurations to perform outside-in and inside-up techniques can establish control over your entire cloud environment. With these “keys to the kingdom,” it’s easy to launch varied attacks against your organization.

The Importance of Least Privilege


The principle of least privilege refers to an information security concept in which
users are given the minimum levels of access or permissions needed to perform their
job functions. By enforcing least privilege across cloud identities, you can reduce
the risk of cyberattacks and data breaches.

Alerts
Alerts are an important part of continually monitoring all of your cloud
environments to detect misconfigurations (such as exposed cloud storage instances),
advanced network threats (such as cryptojacking and data exfiltration), potentially
compromised accounts (such as stolen access keys), and vulnerable hosts. Prisma
Cloud correlates configuration data with user behavior and network traffic to
provide context around misconfigurations and threats in the form of actionable
alerts.

Alert Lifecycle
The following graphic shows the various statuses in the alert lifecycle. Click the
image to enlarge it.

Alert Rules
Alert rules generate alerts based on a policy violation by the resources in the
account groups. Alerts will only be generated if you set up an alert rule. Prisma
Cloud does include an out-of-the-box alert rule, so you may see alerts generated
after you add your cloud accounts.
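
Conceptually, an alert rule is a binding of policies to account groups. The sketch below uses hypothetical field names (not the Prisma Cloud API) to show how resources in the targeted account groups produce alerts when they violate a bound policy.

    # Hypothetical alert-rule evaluation: policies bound to account groups produce alerts.
    alert_rule = {
        "account_groups": {"production"},
        "policies": {"storage-publicly-readable", "security-group-open-to-world"},
    }

    resources = [
        {"id": "bucket-1", "account_group": "production", "violations": {"storage-publicly-readable"}},
        {"id": "vm-7", "account_group": "dev", "violations": {"security-group-open-to-world"}},
    ]

    def evaluate(rule, resource_list):
        alerts = []
        for resource in resource_list:
            if resource["account_group"] not in rule["account_groups"]:
                continue  # the rule applies only to its account groups
            for policy in resource["violations"] & rule["policies"]:
                alerts.append({"resource": resource["id"], "policy": policy, "status": "open"})
        return alerts

    print(evaluate(alert_rule, resources))  # only bucket-1 generates an alert
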
Notifications and Integrations
In addition, Prisma Cloud provides an out-of-the-box ability to configure external integrations with third-party technologies, such as SIEM platforms, ticketing systems, messaging systems, and automation frameworks, so that you can continue using your existing operational, escalation, and notification tools.

Course Summary
Now that you have completed this course, you should be able to:

Describe the basic DevOps software development model

Describe the differences between DevOps, SecOps, and DevSecOps

Describe the CI/CD model

Describe IAM security

Describe alerts and notifications


Cloud Security Fundamentals

Cloud-Native
Application Protection

This lesson describes how cloud-native applications are protected by using a CNAPP
platform and what protections a CNAPP platform comprises.

Important Terminology
Let's read the important terminology that will be used in this lesson.

Click each tab to read the important terminology in this lesson.

Cloud-Native Application Protection Platform (CNAPP)

Software Development Lifecycle

Cloud Native Computing Foundation

Distributed Cloud
Benefits of CNAPP Protection
CNAPPs provide a unified cloud security solution to help security teams scan, identify, and remediate security vulnerabilities. Legacy cloud security systems offered disparate security coverage, leaving gaps and blind spots. Additionally, they imposed high operational overhead and required deep technical expertise.

Here are some of the core cloud security protections that a complete CNAPP solution
provides.

CSPM - Visibility, Governance, and Compliance


Visibility, governance, and compliance is a key area for true CSPM coverage.
Security standards are essential to prevent successful cybersecurity attacks.

The Importance of Security Standards


To prevent successful attacks, you must ensure your cloud resources and SaaS
applications are correctly configured and adhere to your organization’s security
standards from day one. You also must make sure these applications and the data
they collect and store are properly protected and compliant to avoid costly fines,
brand reputation damage, and loss of customer trust. Most if not all enterprise
security teams must meet security standards and maintain compliant environments at
scale and across SaaS applications.

Compliance Requirements
Click the tabs for more information about each compliance requirement.

Real-Time Discovery
Config Governance
Access Governance
Compliance Auditing
Seamless UX
Real-time discovery and classification of resources and data across dynamic SaaS as
well as PaaS and IaaS environments.

Cloud Workload Protection


Cloud Workload Protection (CWP) provides consistent visibility and control, including vulnerability scanning in the development process, workload protection at runtime, application control, memory protection, behavioral monitoring, host-based intrusion prevention, and optional anti-malware protection.

Enterprises rely on a mix of VMs, containers, and serverless functions, which can all be delivered in various service form factors with varying amounts of control. Unique security requirements for each make consistent workload protection a challenge. Click each item to learn about each type of cloud native application.
VMs

Traditional monolithic applications that typically run on a Linux-based or Windows-based operating system. Monolithic applications are single, unified software applications that are self-contained and independent of other applications.

Containers

Applications that run on top of VMs or on an enterprise container platform and are
managed by any orchestrator.

Containers-as-a-Service

Cloud-based service for organizations to manage their virtualized applications, clusters, and containers to make deployments faster and easier.
On-Demand Containers

Containers and PaaS applications that run on offerings such as AWS Fargate, Google
Cloud Run, Microsoft ACI, and Pivotal Application Service (now renamed to VMware
Tanzu Application Service).

Serverless

Platforms such as AWS Lambda, Azure Functions, and Google Cloud Functions.

Cloud Code Security


Businesses are recognizing the true potential of cloud computing and are moving at breakneck speed to capitalize on the dynamic nature and resilience of cloud-native technologies. Likewise, businesses are leveraging DevOps, automation, and agile workflows to build and deploy software and cloud infrastructure even faster.

Cloud Code Security addresses the challenge that security teams have when trying to
keep pace with DevOps and infrastructure automation by embedding security
throughout the development lifecycle. In this way, developers can play a part in
securing applications and infrastructure before deployment.
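
A common shift-left check is scanning infrastructure definitions before they are deployed. The sketch below inspects a hypothetical storage-bucket definition (a plain Python dictionary standing in for an IaC template) and flags risky settings so the build can fail before anything reaches the cloud.

    # Hypothetical shift-left IaC check: flag risky settings before deployment.
    bucket_definition = {
        "name": "customer-exports",
        "public_read": True,          # risky: anyone can read the contents
        "encryption_at_rest": False,  # risky: data is stored unencrypted
    }

    def scan_bucket(definition):
        findings = []
        if definition.get("public_read"):
            findings.append(definition["name"] + ": bucket is publicly readable")
        if not definition.get("encryption_at_rest"):
            findings.append(definition["name"] + ": encryption at rest is disabled")
        return findings

    for finding in scan_bucket(bucket_definition):
        print("FAIL:", finding)  # failing the build forces a fix before deployment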

CIEM Protection
Cloud infrastructure entitlement management (CIEM) is the process of managing
identities and privileges in cloud environments. The purpose of CIEM is to
understand which access entitlements exist across cloud and multicloud
environments, and then identify and mitigate risks resulting from entitlements that
grant a higher level of access than they should. CIEM solutions help companies
reduce their cloud attack surface and mitigate access risks posed by excessive
permissions.

Least Privilege Model


The least privilege model is an important security concept for identity management. The number of identities in a cloud account multiplied by the number of entitlements each identity has makes for a massive attack surface. The goal of the least privilege model is to reduce the number of cloud entitlements an identity has to only the exact ones it needs.

The Four Cs of Cloud Native Security


Additionally, the Cloud Native Computing Foundation (CNCF) defines a container
security model for Kubernetes in the context of cloud native security. In this
model, each layer provides a security foundation for the next layer.
Click the arrows for more information about the four Cs of cloud native security.

Cloud

The cloud (and data centers) provide the trusted computing base for a Kubernetes
cluster. If the cluster is built on a foundation that is inherently vulnerable or
configured with poor security controls, then the other layers cannot be properly
secured.

Clusters

Securing Kubernetes clusters requires securing both the configurable cluster components and the applications that run in the cluster.

Containers

Securing the container layer includes container vulnerability scanning and OS dependency scanning, container image signing and enforcement, and implementing least privilege access.

Code

The application code itself must be secured. Security best practices for securing
code include requiring TLS for access, limiting communication port ranges, scanning
third-party libraries for known security vulnerabilities, and performing static and
dynamic code analysis.
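
As a small illustration of two of those code-layer practices (requiring TLS and limiting communication port ranges), the sketch below checks a hypothetical service configuration against an assumed policy.

    # Hypothetical code-layer policy check: require TLS and limit exposed ports.
    ALLOWED_PORTS = {443}  # assumed policy: only HTTPS is exposed

    service_config = {
        "name": "payments-api",
        "tls_required": False,
        "listen_ports": [443, 8080, 9200],
    }

    def check_service(config):
        findings = []
        if not config.get("tls_required"):
            findings.append(config["name"] + ": TLS is not enforced for access")
        extra_ports = set(config.get("listen_ports", [])) - ALLOWED_PORTS
        if extra_ports:
            findings.append(config["name"] + ": ports outside policy: " + str(sorted(extra_ports)))
        return findings

    for finding in check_service(service_config):
        print("FINDING:", finding)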

Course Summary
Now that you have completed this lesson, you should be able to:

Describe what a CNAPP is and how it provides cloud security protection

Describe the four Cs of cloud native security

Describe the benefits of visibility, governance, and compliance in cloud security

Describe cloud code security

Describe cloud workload protection (CWP)


Cloud Security Fundamentals

Prisma Cloud
Security Features

This lesson describes how Prisma Cloud's Cloud-Native Application Protection Platform (CNAPP) solution provides comprehensive security for cloud native applications.

Prisma Cloud CNAPP Solution


Prisma Cloud secures applications from code to cloud, enabling security and DevOps
teams to effectively collaborate to accelerate secure cloud-native application
development and deployment.

Prisma Cloud provides CNAPP support for the full cloud application lifecycle under
the Code/Build/Deploy/Run (CBDR) phases.

Cloud Security Posture Management (CSPM)


Prisma Cloud takes a unique approach to CSPM, going beyond mere compliance or
configuration management. Vulnerability intelligence from more than 30 data sources
provides immediate clarity on critical security issues, while controls across the
development pipeline prevent insecure configurations from ever reaching production.

Visibility, Compliance, and Governance


Prisma Cloud is a cloud native security platform that provides visibility,
compliance, and governance for your public cloud accounts in the face of security
threats. Some of the features of the Prisma Cloud security platform are:

Cloud Asset Inventory

Compliance Monitoring and Reporting

Infrastructure-as-code (IaC) Scanning

The Compliance Overview is a dashboard that provides a snapshot of your overall compliance posture across various compliance standards. Click the image to enlarge it.

Threat Detection
Prisma Cloud provides policies for a myriad of use cases, such as detecting account hijacking attempts, backdoor activity, network data exfiltration, unusual protocol usage, and DDoS activity. After a threat is detected, an alert is generated notifying administrators of the issue at hand so that they can quickly remediate it.

Some of the anomaly policies provided by Prisma Cloud threat detection in CSPM are:

User and Entity Behavior Analytics (UEBA)


Prisma Cloud analyzes millions of audit events and then uses machine learning to
detect anomalous activities that could signal account compromises, insider threats,
stolen access keys, and other potentially malicious user activities.
Network Anomaly Detection
Prisma Cloud monitors cloud environments for unusual network behavior and can
detect unusual server port or protocol activity, including port-scan and port-sweep
activities that probe a server or host for open ports.
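
A port scan has a simple statistical signature: one source touching an unusually large number of distinct ports on a target within a short window. The sketch below applies that heuristic to hypothetical flow records with a fixed threshold; a real detector would baseline normal behavior rather than rely on a single constant.

    from collections import defaultdict

    # Hypothetical flow records: (source_ip, destination_ip, destination_port)
    flows = [("10.0.0.5", "10.0.1.20", port) for port in range(20, 60)]  # one source probing 40 ports
    flows.append(("10.0.0.9", "10.0.1.20", 443))                         # normal client traffic

    PORT_THRESHOLD = 25  # assumed threshold for flagging a possible scan

    def detect_port_scans(flow_records):
        ports_touched = defaultdict(set)
        for src, dst, port in flow_records:
            ports_touched[(src, dst)].add(port)
        return [pair for pair, ports in ports_touched.items() if len(ports) > PORT_THRESHOLD]

    for src, dst in detect_port_scans(flows):
        print("possible port scan:", src, "->", dst)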

Automated Investigation and Response


Prisma Cloud provides automated remediation, detailed forensics, and correlation capabilities. Combined insights from workloads, networks, user activity, data, and configurations accelerate incident investigation and response.

Navigate to the Investigate page. For UEBA anomaly policies, you can also see a
Trending View of all anomalous activities performed by the entity or user. Click
the image to enlarge it.

Data Security
The Data Security capabilities on Prisma Cloud enable you to discover and classify
data stored in objects and protect against accidental exposure, misuse, or sharing
of sensitive data.

Click the arrows for more information about different features that are included
with Prisma Cloud Data Security.

Data Visibility and Classification

Prisma Cloud provides complete visibility into all objects, including contents by region, owner, and exposure level. You can fine-tune data identifiers—such as driver’s license, Social Security number, credit card number, or other patterns—to identify and monitor sensitive content (see the sketch after these feature descriptions).

Data Governance

Prisma Cloud includes specific data policies to quickly determine your risk profile
based on data classification and exposure/file types. Enable or disable data
compliance assessment profiles—for example, Payment Card Industry Data Security
Standards (PCI DSS), General Data Protection Regulation (GDPR), System and
Organization Controls Type 2 (SOC 2), and Health Insurance Portability and
Accountability Act (HIPAA)—based on needs and generate audit-ready reports with a
single click.

Malware Detection

Prisma Cloud helps users identify and protect against known and unknown file-based
threats that have infiltrated objects, leveraging the WildFire malware prevention
service to flag any objects that contain malware.

Alerting and Remediation

Prisma Cloud automatically generates alerts for each object based on data
classification, data exposure, and file types. Analysts can take action on alerts
to quickly remediate exposure, tag individual DevOps teams for violations, and
delete any objects that contain malware.
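
Returning to the data identifiers mentioned under Data Visibility and Classification, identifiers of this kind are typically pattern-based. The sketch below uses two simplified regular expressions (illustrative only; production identifiers also validate matches, for example with a Luhn check for card numbers) to classify a piece of text.

    import re

    # Simplified, illustrative patterns - real identifiers include extra validation
    # to reduce false positives.
    PATTERNS = {
        "social_security_number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    }

    def classify(text):
        """Return the set of sensitive-data types found in a piece of text."""
        return {name for name, pattern in PATTERNS.items() if pattern.search(text)}

    sample = "Customer SSN 123-45-6789 paid with card 4111 1111 1111 1111"
    print(classify(sample))  # both identifier types are detected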

The new Data Dashboard tab provides complete visibility into your object storage. The dashboard widgets below give you insight into how many storage buckets and objects you have, what kind of data is stored in those objects, across which regions, who owns what, and the exposure level of the objects. This tab is available under the Dashboard menu. Click the image to enlarge it.

Course Summary
Now that you have completed this course, you should be able to:

Describe how Prisma Cloud provides security protection for Code/Build, Deploy, and
Run phases

Describe Prisma Cloud's CSPM features

Describe Prisma Cloud's Threat Detection capability

Describe Prisma Cloud's Data Security capability


Security Operations Fundamentals

Day in the Life of
a SecOps Analyst

Erik is a SecOps analyst on the Security Operations team and it is his job to
triage alerts to determine if there is a security threat. Before Erik starts his
job, he will need to understand the general concepts of SecOps and the business
goals. Erik will need training and support from the people he interacts with on a
daily basis. While mitigating threats, Erik will need to know the processes to
follow, the teams he will be interacting with, and the technology he will be using
to gain visibility into the network.

Let's go on this journey with Erik to see how he makes his decisions and his plan
of action.

Security Landscape
SecOps is a necessary function for protecting our digital way of life, for businesses and customers. Most organizations are responding with a fundamental shift in their cybersecurity approach - moving away from a collection of point solutions, ad-hoc entities, and processes toward a more deliberate structure and the creation of a dedicated SecOps function to manage and monitor a unified security architecture.

Goodbye Ad-Hoc Systems


The days of a best-of-breed ad-hoc system are gone. These systems do not
communicate with each other and are too expensive for a company to manage and
maintain individually. A security team needs to have the proper technology
implemented that simplifies data visibility by unifying intelligence from multiple
security tools. With ad-hoc systems, too much time would be needed to coordinate
all of the information from these individual systems, parse the data, and then
compile the data for an analyst to review.

Hello Automation via Security Orchestration


With the influx of massive amounts of data, security processes should be automated
to provide realistic security assessment and functional real-time mitigation. This
can be achieved via security orchestration. By automating processes, you remove many of the requirements for manual work or human intervention, which can slow the flow of data and interrupt the ability to review and analyze security issues quickly.

What the Landscape Encompasses


Click the tabs to learn about the risks, problems, target objective, and
deliverables the landscape encompasses.

Risks
Problems
Target Objective
Deliverables
The risk is a catastrophic breach that leads to data exfiltration, substantial
financial loss, a severely tarnished reputation, loss of current and future
clients, and possible legal and regulatory issues coupled with customer
compensation.

An Overview of SecOps
Click the video to hear from Rishi Bhargava, former Vice President of Product
Strategy and the leader in SecOps automation, about the importance of SecOps.


Security Operations
SecOps - Leads the Charge
The SecOps team (also known as a Computer Emergency Response Team, Computer Security Incident Response Team, etc.) is a team of security professionals dedicated to monitoring and analyzing activity on networks, servers, endpoints, databases, applications, websites, and other systems that connect to your network either locally or from a remote location. The SecOps team's goal is to detect, analyze, and respond to cybersecurity incidents using a combination of technology solutions and a set of processes to help mitigate the incidents.

SecOps - Management and Implementation


Security Operations (SecOps) is a collaborative effort between security teams and
operations teams that integrates tools, processes, and technology for protecting
our digital way of life. The concept of SecOps covers your users which include
internal, partners, and customers, your systems, and the data trusted to your
organization. The goal of SecOps is to improve the security posture of the
business, its products, and services by introducing security as a shared
responsibility.

Security Operations Elements


By dividing Security Operations into discrete elements, you can assess the elements
covered in a SecOps, and to what extent. The element map can be used to evolve
Security Operations to provide better prevention and faster remediation. All of the
elements tie back to the business itself. SecOps goals include the development and
operationalization of the capabilities that the business requires.

Click the video to watch how the elements of SecOps are divided into six pillars.


Main Functions of Security Operations

Security Operations is a function that identifies, investigates, and mitigates threats, and provides continuous improvement. Because Security Operations engineers have the first interaction with security issues, they are responsible for executing these actions with the goals of reducing the number of alerts flowing into the SecOps, accessing tools to quickly investigate threats, and reducing the time required to contain a breach.

Click the tabs to learn about how these actions can help protect against security
issues.

Identify
Investigate
Mitigate
Continuously Improve
Identify an alert as potentially malicious and open an incident.

Security Orchestration
Security orchestration is a method of connecting disparate security technologies
through standardized and automatable workflows that enable security teams to
effectively carry out incident response and operations.

Security Orchestration - Automates the Process


Security orchestration as a concept is defined as the automation of as many processes within security operations as possible. Automating processes helps remove the manual steps performed by members of the SecOps team, which slow down the flow and reduce the ability to review and analyze security issues. Automation can analyze data at a much faster rate to accurately assess, respond to, and then mitigate the security incident appropriately.
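
A playbook is the basic unit of that automation. The sketch below is a heavily simplified, hypothetical enrichment-and-response playbook: it looks up an indicator, scores it, and either closes the alert or escalates it for containment. Real orchestration platforms express this as configurable workflow steps and integrations rather than hand-written code.

    # Hypothetical threat-intelligence table standing in for a real enrichment integration.
    THREAT_INTEL = {"203.0.113.45": {"reputation": "malicious", "confidence": 90}}

    def enrich(alert):
        intel = THREAT_INTEL.get(alert["source_ip"], {"reputation": "unknown", "confidence": 0})
        return {**alert, **intel}

    def triage_playbook(alert):
        """Automated first pass: close benign alerts, escalate likely-malicious ones."""
        enriched = enrich(alert)
        if enriched["reputation"] == "malicious" and enriched["confidence"] >= 80:
            return "escalate: isolate host " + enriched["host"] + " and open an incident"
        return "close: no corroborating intelligence"

    print(triage_playbook({"source_ip": "203.0.113.45", "host": "laptop-42"}))
    print(triage_playbook({"source_ip": "198.51.100.7", "host": "laptop-43"}))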

Terminology
Security orchestration uses the following terms to help define its processes.

Click the tabs to learn the term definitions.

Security Automation
Playbooks
Integration
Ingestion
The process of executing security tasks using machine-driven responses to help ensure consistent handling of security issues

Components and Technologies of Security Orchestration


The major components and technologies of Security Orchestration are managed by a
special team or administrator that is a subject matter expert on the specific
application or appliance chosen. All of them are almost equally important to the
overall fabric that is used by the automation processes within the enterprise’s
Security Orchestration architecture.

Security Information and Event Management (SIEM)


Monitors multiple sources to collect, correlate, and aggregate data providing
reports, alerts, and information for real-time detection and mitigation.

Threat Intelligence
Collects and correlates data from both internal and external sources to provide
information to determine malicious intent

Endpoint Security
Provides real-time protection for devices such as mobile phones, laptops, and
desktop systems connected to the enterprise network. Endpoint Security can detect,
alert, respond, and mitigate.

Network Security
Hardware and software components that provide protection for the enterprise network infrastructure. The collection of network security tools plays an extremely critical part in security by alerting on and blocking malicious activity.

Security Operations and Security Orchestration


Security operations is a function that identifies, investigates, mitigates threats,
and provides continuous improvement. Security orchestration automates processes
within Security Operations.

Separate Groups and Processes

Function At a High Level

Automation Process
Let's Help Erik!
Erik wants to ensure he understands the goals of Security Operations and Security
Orchestration.
Can you remind Erik what the SecOps team's main goal is?

Detect, analyze, and respond to cybersecurity incidents using a combination of technology solutions and a set of processes to help mitigate the incidents

Improve the security posture of the business, its products, and services by introducing security as a shared responsibility

Reduce the time required to contain a breach

Connect disparate security technologies through standardized and automatable workflows

When Erik first arrives to work, which component or technology would he use to view
aggregated data about his network?

Network Security

Threat Intelligence

Security Information & Event Management

Endpoint Security

Erik has identified the alert and opened an incident in the ticketing system. What
Security Operations function would Erik perform next?

Perform a detailed analysis of the alert

Investigate the root cause and impact of the incident

Stop the attack and close the ticket

Adjust and improve operations to stay current with changing and emerging threats


Security Operations Fundamentals

Business
Pillar

The Business pillar defines the purpose of the Security Operations team to the
business and how it will be managed. The Business pillar helps to provide Erik and
the rest of the SecOps team with answers to questions such as "Who do we need to
help protect the business?"; "How will we protect the business?"; "Where are we
going to do this from?"; and "How do we know if what we have in place is working
effectively?"

Both Erik and the SecOps team are responsible for protecting the business. The
reason for Security Operations, for all of the equipment, for everything SecOps
does is ultimately to service one main goal, protect the business. Without the
Business pillar, there would be no need for Erik or the SecOps team.

Elements in the Business Pillar


The elements in the Security Operations Business pillar define the purpose of the
Security Operations team to the business and how it will be managed.

To understand the purpose of the team and the impact to the business, there are
several questions that must be answered.

Mission I Governance I Planning

Budget I Staffing I Facility

Metrics I Reporting I Collaboration
Business Objective: Mission, Governance, and Planning
Three elements serve as the fundamental root of the Business pillar.

Click each tab to learn about them.

Mission
Governance
Planning
The “Mission” statement is the fundamental root from which the organization grows and is the road map that guides the organization on its course. It should include the objectives of the Security Operations organization and the goals the organization is expected to achieve for the business. It defines what an organization is, why it exists, and its reason for being. It is imperative to socialize the mission statement and get buy-in from executives; this provides clear expectations and scope for what Security Operations will deliver. The mission statement should define what actions will be taken, how those actions will be executed, and what the results are to the business.
Staffing
Staffing of security skills remains one of the biggest challenges of the security
industry, with additional challenges existing for organizations located outside of
major tech hubs. Organizations with these issues should consider in-sourcing
resources (analyst-as-a-service) to alleviate the strain of staffing.

You want to staff the appropriate level of knowledge for each role in the SecOps.
There should be diversification of skills within the security operations
organization such as malware analysis, network architecture, and threat
intelligence. Basic knowledge and skills should overlap among team members in case
there are departures for vacation, illness, or attrition.

Budget
The Budget is developed to strike a balance of what is truly needed. A business-
savvy budgeting resource can help the Security Operations organization navigate
CapEx spending vs. OpEx spending and the expectations of the business. Be aware
that government SecOps have additional considerations around the timing of
elections and possible party-switching, which could result in dramatic budget
shifts.

Click the arrows for the steps to consider when setting the budget.

Step 1

Obtain agreement regarding the mission of Security Operations and the SecOps team.

Facility
The facilities needed for your Security Operations team will depend on how you will
be delivering the service.

A physical SecOps may need separation from other parts of the business, including
the Network Operations Center (NOC). Although these two groups need to tightly
interface with each other, they may need separate spaces to adhere to need-to-know
principles and avoid specific legal issues. Where fusion centers are established,
additional training for the Network Operations staff is required to ensure
adherence to privacy principles.

A facility should include basic locking capabilities and, preferably, an advanced access schema that includes two-factor authentication. Virtual SecOps are composed of team members that do not hold a physical space. They utilize online, secure portals to monitor traffic. The use of a Virtual SecOps requires extra care to secure the VPN and endpoint devices that access the security portal, and a private space must be available for phone calls and discussions within the Security Operations team.

Metrics
If time is spent gathering metrics that cannot drive change, then they are a waste
of time and can drive the wrong behavior. When determining good metrics for your
business, always keep in mind the mission of the SecOps and the value it provides
to the business. The business wants confidence that attacks can be prevented and also that, if or when a breach does occur, it can be handled quickly to limit the impact.
Poor Metrics
The following are metrics that can drive the wrong behavior:

Mean Time to Resolution (MTTR)


Mean Time to Resolution (MTTR) is a good metric when used in a NOC (where uptime is
key), but it can be detrimental when used in a SecOps. Holding analysts accountable
for MTTR will result in rushed and incomplete analyses. Analysts will rush to close
incidents rather than do full investigations that can feed learning back into the
controls to prevent future attacks. This will not produce better outcomes or
reduced risk for the business.

Number of Incidents Handled


Caution should be taken when measuring metrics based on an individual's
performance. Ranking top performers by number of incidents handled can have skewed
results and may lead to analysts “cherry-picking” incidents that they know are fast
to resolve. Additionally, evaluating individual performance in this way violates
the law in various countries.

Number of Firewalls/Rules Deployed


Counting the number of firewall rules deployed can be a poor metric because 10,000 firewall rules can be in place, but if the first rule is 'any-any' then the rest are useless.

Number of Feeds into SIEM


Measuring the number of data feeds into a SIEM is similar to counting the number of
firewall rules deployed. If there are 15 data feeds but only one use-case, then the
data feeds aren’t being utilized and are a potentially expensive waste.

Good Metrics
Good metrics should provide insight into whether the business should have
confidence or not. There are two types of confidence to focus on: configuration
confidence and operational confidence.

Configuration Confidence
Configuration confidence is knowing that your technology is configured so that an attack can be prevented, remediated, or analyzed. Click each tab for details about the questions that need to be answered.

Are the security controls running?

How many changes are occurring outside of the change control policy?

Are the technologies in place configured to best practice?

What percent of features and capabilities are being utilized?

Operational Confidence
Operational confidence is knowing that the right people and processes are in place
to handle a breach if/when it occurs. Click each tab for details on the questions
that need to be answered.

How many events are analysts handling per hour?


Are there repeat incidents flowing into the SecOps?

Is the SecOps handling alerts for known threats?

How often are there deviations in SecOps procedures?

Reporting
Reporting is meant to give an account of what has been observed, heard, done, or investigated. Its purpose is to quantify activity and demonstrate the value the Security Operations team is providing to the business or client organizations.

Click the arrow for more information about the daily, weekly, and monthly reports.

Daily Reports

Daily reports should include open incidents with details centered on daily
activity.

Collaboration
A set of tools is required to facilitate communication and collaboration within and
around the Security Operations organization.

These tools can include features around ticketing, war room collaboration, shift
turnover, process documentation, and may contain the entirety of the IR
documentation for every event. They can also include communication features such as
email distribution lists, shared inboxes, instant messaging, and video conferencing
tools.

Collaboration tools are often incorporated into other tools and are at high risk of
feature duplication. The Security Operations team should define what the main
tool(s) used will be, which will be the single source of truth, and what
information will be captured. Access to these tools typically extends beyond the
Security Operations organization, especially in the case of war rooms, so access
control must be addressed by the chosen tools.

Let's Help Erik!


Erik and the SecOps team have a detailed plan of the mission, budget, and staffing
requirements. They need to gather metrics to ensure their plan will be effective in
driving change.

What are the three configuration and operational questions they would need to
answer? (Choose three.)

Are the technologies in place configured to best practice?


How many analysts are resolving incidents per day?

How often are there deviations to SecOps procedures?

How many events are analysts handling per hour?

How many firewall and endpoint technologies are in place?

What details should Erik's weekly reports include?

Open incidents and other daily activity that have been accomplished

Overall effectiveness of the SecOps functions, how long events are sitting in queue
before being triaged, and if staffing in the SecOps is appropriate

Security trends to initiate threat-hunting activities, open and closed cases, and
conclusions of tickets (malicious, benign, false-positive)

All of the above

What is the first step Erik should consider when setting the budget?

Establish a budget to meet the minimum requirements of the team

Obtain an agreement regarding the mission of the Security Operations

Identify the technology, staff, facility, training, and additional needs

Define the processes needed to change the allocated budget and for emergency budget
relief

Security Operations Fundamentals

People
Pillar

The People pillar defines who will be accomplishing the goals of the Security
Operations team and how they will be managed.

As a part of the People pillar, Erik received training necessary for him to be able
to triage the alerts in addition to the other processes and functions within the
SecOps. This training provides Erik with the skills necessary to become efficient
at detecting and prioritizing alerts. As Erik’s knowledge increases, he will have
opportunities to grow on the SecOps team. He will also have the skills to advance
in his career to other areas.

Elements in the People Pillar


The elements in the Security Operations People pillar define the roles for
accomplishing the Security Operations team goals and how those roles will be
managed.

Employee Utilization
Training
Career Path Progression
Tabletop Exercises
Employee Utilization
Methods should be developed to maximize the efficiency of a Security Operations
team specific to the existing staff.

Security Operations staff are prone to burnout due to console fatigue and extreme workloads. To avoid this, team members should be assigned different tasks throughout the day. These tasks should be structured and may include:

• Shift turnover stand-up meeting (beginning of shift)

• Event triage

• Incident response

• Project work

• Training

• Reporting

• Shift turnover stand-up meeting (end of shift)

Another tactic to avoid burnout is to schedule shifts to avoid high-traffic commute times. Depending on the area, 8am-5pm may line up with peak (vehicle) traffic patterns. Shifting the schedule by two hours could reduce stress on the staff.

Training
Proper training of staff will create consistency within an organization.
Consistency drives effectiveness and reduces risk.

Types of Training Content


Use of a formal training program will also enable the organization to bring on new
staff quickly. Some organizations resort to on-the-job or shadow training for new
hires, which is not recommended on its own. While shadowing other analysts during
initial employment in the SecOps is important, it should not be the only means of
training.

Formal documentation should exist around capabilities, tools, processes, and communication plans (both internal and external) that new and existing staff can reference. Enablement plans for new tools should also be contained in the formal training program. This continuous education requires time and investment and should be supported by the business.

Career Path Progression


Retaining staff is also important and providing a clear career path is necessary to
achieve this. A role’s definition and skills matrix should be created, and a
maintenance plan established in order to keep the skills matrix up to date. A semi-
annual review of this content is suggested. The SecOps manager also should drive
improvement by working with staff to continually close gaps in skills and to reduce
deficiencies.

Management Is Not Always the Goal


Remember that not all employees want to move into management. There also should be
a technical path. Document the details of the job roles and levels in each path and
share them with the team. Provide education opportunities to help staff move
through their preferred career path.

Tabletop Exercises
Tabletop exercises are planned events where the stakeholders for the SecOps or the
entire security organization walk through a security event to test the processes
and reactions to the type of incident. They can include simulated network activity
or social engineering.

Click the tabs to learn more about tabletop exercises.

Who can participate in tabletop exercises?

When should tabletop exercises be performed?

Let's Help Erik!


Reports of employees feeling overworked and understaffed are being sent to Erik and
the SecOps team.

Which three methods can the SecOps team employ to mitigate employee burnout?
(Choose three.)

Create a plan to move all employees into management roles

Create on-the-job training only, because it's more helpful than reading
documentation

Shift turnover stand-up meeting (beginning or end of shift)

Schedule shifts to avoid high-traffic commute times

Train at least two employees on the same tasks so there is no single point of
failure

Which three types of training content can Erik teach to create consistency within
an organization? (Choose three.)

Company security and privacy training

Continuous education training

Incident response training

Event triage training

Tool-feature use training

Providing education opportunities to SecOps analysts can help Erik's staff grow
into different career paths. What advanced roles are available for the SecOps
analysts?

Tier 2 or Tier 3 Analyst

Team Lead/Shift Lead

SecOps Manager

Threat Hunter

All of the above


Security Operations Fundamentals

Processes
Pillar

The Processes pillar defines the step-by-step instructions and functions that are
to be carried out by the SecOps team for the necessary security policies to be
followed. Processes are a series of actions or steps taken to achieve an end goal.
As part of the Processes pillar, Erik will need to determine the other teams that
should be involved, the scope of the work for each team, and what each team will be
responsible for.

While monitoring the ticketing queue, Erik notices a new set of alerts that has
been sent to the SecOps team by one of the network devices. Based on the alert
messages, Erik needs to determine whether the alert message is a security incident,
so he opens an incident ticket. Erik starts by doing his initial research in the
log files on the network device to determine if the threat is real. After reviewing
the log files, Erik determines that the alert is a real threat. Based on the
Severity Triangle, Erik has determined that the severity level for this alert is
currently High.
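
The exact criteria behind the Severity Triangle are specific to each organization, so the sketch below is only a rough illustration of the idea: severity is derived from a couple of assumed factors, here asset criticality and confidence that the activity is malicious.

    # Rough illustration only - real severity models (such as the Severity Triangle
    # Erik uses) are defined by the organization's own criteria.
    def severity(asset_criticality, malicious_confidence):
        """Both inputs are on a 1-5 scale; returns a coarse severity label."""
        score = asset_criticality * malicious_confidence
        if score >= 16:
            return "High"
        if score >= 8:
            return "Medium"
        return "Low"

    # For example, likely-malicious activity (4) against a business-critical server (5):
    print(severity(asset_criticality=5, malicious_confidence=4))  # High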

Day in the Life of a SecOps Analyst


Click the video to hear Rishi describe a typical work day of a SecOps analyst.


Elements in the Processes Pillar


The Processes pillar defines the processes and procedures executed by the security
operations organization to achieve the determined mission. Security Operations can
be defined broadly as a function that identifies, investigates, and mitigates
threats. Any person in an organization who is responsible for reviewing security logs fits the role of Security Operations. Continuous improvement is also an
important activity of a Security Operations organization.

The four core functions of Security Operations as they relate to processes are identification, investigation, mitigation, and continuous improvement. Click the tabs to learn about the elements in each core function.

Identification
Investigation
Mitigation
Continuous Improvement
Identify an alert as potentially malicious and open an incident. Elements under the Identification core function are:

Alerting

Initial Research

Severity Triage

Escalation Process

Alerting; Initial Research; Severity Triage; Escalation Process


The majority of a Security Operations analyst’s time is spent in the identification phase due to the number of false positives and low-fidelity alerts that analysts must navigate. Correctly implemented prevention-based architectures and automated correlation can help reduce the time needed for this phase.

Click the tabs to learn how each element is used to identify potentially malicious
alerts.

Alerting

Initial Research

Severity Triage

Escalation Process

Detailed Analysis
Detailed analysis is an investigation into an incident to determine whether it is
truly malicious, to identify the scope of the attack, and to document the observed
impact.

Click the arrows for more information about the benefits of detailed analyses.
Gather Relevant Information

Detailed analysis ensures that all relevant information is gathered, including:

• The potential impact of the security incident

• The affected assets

• The adversary’s objective

• The potential impact of containment measures

Determine True Incidents

It also helps to confidently determine if an incident is a true incident or a false positive. In the event of a false positive, feedback should be provided to the content engineer so they can tune alerts or to the security engineering team so that they can update controls.

Close Remaining Gaps

The detailed analysis procedure closes any remaining gaps that were left after the initial research. Affected IT assets and the business services they support are also identified. The available containment measures should be evaluated to determine whether they were effective at mitigating the threat and produced the intended or desired results.

Breach Response; Mitigation; Preapproved Mitigation Scenarios; Interface Agreements; Change Control
Once an incident has been validated, the mitigation process begins.

Click the tabs to learn how each element is tied into the mitigation process.

Breach Response
Mitigation
Mitigation Scenarios
Interface Agreements
Change Control
A true breach requires a plan separate from standard mitigation that defines how to
effectively respond during a critical severity incident. The first piece of this
plan is to identify the cross-functional stakeholders, including corporate
communications, legal teams, and third parties as appropriate. Then assign a
timeline of when each stakeholder should become involved and how they will be
initially notified. Define the details of the information to be collected and
shared by the Security Operations team and the SecOps commander responsible for
providing the information to the stakeholders. Training and policies should be
created to prevent leaks of the breach details beyond the breach response team.

Breach response plans should be periodically tested, typically a few times per
year, and at least once without the security team having prior knowledge of the
test.

Tuning; Process Improvement; Capability Improvement; Quality Review



Click the tabs to learn how each element helps you keep up with new technologies, tactics, and threats.

Tuning

Process Improvement

Capability Improvement

Quality Review

Let's Help Erik!


An attacker has infiltrated Pumpice. Erik and the SecOps team classify this attack
as 1 - Critical. The attacker is still retrieving sensitive data from Pumpice's
employees, so the SecOps team has not yet mitigated the attack.

What could Erik and the team do if they wanted to reclassify the severity level of
the attack?

The team can reclassify the severity to 3 - Medium because the team is already
working on mitigating the issue.

Nothing. Severity 1 - Critical indicates a breach and is the highest severity level.

The team can reclassify the attack as a Severity 0 to indicate an ongoing breach
where the attacker is attempting to exfiltrate, encrypt, or corrupt data.

The team can reclassify the severity to 5 - Informational, because the attack has
already been identified.

What are three relevant pieces of information that Erik and the SecOps team's detailed analysis investigation can gather? (Choose three.)
How the alert should be triaged

The potential impact of the security incident

Where the attacker will exfiltrate data from next

The adversary's objective

Whether the incident is a true incident or a false positive

What parameter can Erik and the SecOps team use that allows for the immediate
containment or prevention of a security incident without further approvals?

Automatic mitigation scenarios

Automatic resolution scenarios

Pre-approved breach scenarios

Pre-approved mitigation scenarios


Security Operations Fundamentals

Interfaces
Pillar

Security operations is not a silo and needs to work with many other functions or
teams. Each interaction with another team is described as an interface. The
Interfaces pillar defines which functions need to take place to help achieve the
stated goals, and how the SecOps will interface with other teams within the
organization by identifying the scope of each team’s responsibilities and the
separation of each team’s duties.
As Erik is investigating the alert generated by the network device, he partners
with the Threat Intelligence Team to identify the potential risks this threat may
pose to the organization. Erik also interfaces with the Help Desk, Network Security
Team, and Endpoint Security Teams to determine the extent the threat has
infiltrated the network.

Elements in the Interfaces Pillar


Interfaces should be clearly defined so that expectations between the different teams are known. Each team will have different goals and motivations, and understanding them can help with team interactions. Identifying the scope of each team's responsibility and the separation of duties helps to reduce friction within an organization. Interfaces are how processes connect to external functions or departments to help achieve security operations goals.

The Need for Agreements


You should understand the needs, goals, and business mission of other departments to eliminate friction between their teams and the Security Operations team. Conflict between other departments and the SecOps team often arises when a department's own goals lead it to stray from the agreements that were initially established.

Click the tabs to learn about the function's or team's goals and motivations.

Help Desk - Close tickets quickly

IT Operations - Availability and performance of IT infrastructure

DevOps - Develop, implement, and maintain applications; release bug-free features quickly

Enterprise Architecture; Governance, Risk, and Compliance; Business Liaison

By fostering a unified approach to security, the Enterprise Architecture; Governance, Risk, and Compliance; and Business Liaison teams collaboratively enhance the effectiveness of Security Operations.

Enterprise Architecture
The enterprise architecture team is responsible for understanding, developing, and
maintaining both the physical and virtual network designs to meet the business
requirements. The team ensures that security is implemented in the design phase and
not added as an afterthought. It also creates and maintains the architecture
flowcharts and diagrams. The goal of the enterprise architecture team is to balance
and meet the needs of both security and the business.

Governance, Risk, and Compliance


The governance, risk, and compliance (GRC) function is responsible for creating the
guidelines to meet business objectives, manage risk, and meet compliance
requirements. Common compliance standards include PCI DSS, HIPAA, and GDPR, which require different levels of protection, encryption, and data storage. Those
requirements are typically handled by other groups. However, the breach disclosure
requirements directly involve the Security Operations team. The SecOps team must
interface with the GRC team to define escalation intervals, contacts,
documentation, and forensic requirements.

Business Liaison
A growing trend is for security organizations to hire business liaisons. This role ties into the different aspects of the business and helps identify and explain the impact of security. It includes keeping up to date with new product launches and development schedules, onboarding new branch offices, and handling mergers and acquisitions where legacy networks and applications need to be brought into the main security program. This role can also be responsible for partner, vendor, and team interface management.

Help Desk; Information Technology Operations; DevOps; Operational Technology Team


Teams such as the help desk, information technology operations, DevOps, and the operational technology team work together to ensure that the organization is protected from all angles.

Click the tabs to learn about each function or team.

Help Desk
Information Technology Operations
DevOps
Operational Technology Team
The help desk provides end-user support for corporate IT assets. The Security Operations team frequently opens tickets with this team to reimage machines, request system patching, or reject assets joining the network without the proper OS and app version levels. The help desk organization should interface often with the vulnerability management team for tasks such as patches, outdated operating systems, accepted new operating systems, and new supported platforms. Interaction with the vulnerability management team can result in the development of automated tasks. A closed-loop process between the teams should exist to ensure follow-through on IT requests.

SecOps Engineering
The SecOps engineering team is responsible for the implementation and ongoing
maintenance of the Security Operation team’s tools, including the SIEM and analysis
tools.

The responsibilities of the team must be clearly defined. SLAs with the team should
be defined to reduce potential friction between teams and to establish a clear
communication plan. See the two questions you should ask.

Endpoint Security Team; Network Security Team; Cloud Security Team

To strengthen cybersecurity collaboration, the SecOps team interfaces with the Endpoint, Network, and Cloud Security teams.

Click the tabs to learn about each team's responsibilities.

Endpoint Security Team


Network Security Team
Cloud Security Team
The endpoint security team is responsible for developing, implementing, and
maintaining the endpoint security policy. The scope of the team’s responsibilities
may extend into tool selection, implementation, and maintenance, including endpoint
protection platform (EPP) and endpoint detection and response (EDR) capabilities.
An interface should be defined between the endpoint security team, the team
implementing the endpoint security policy, and the infrastructure team deploying
the technology within change control processes. The change control process should
include any specific information that is required for endpoint security updates and
should follow the standard change control steps established for other changes
within the business.

The endpoint security team must interface with the business to define which
endpoint technologies and operating systems will be allowed and to address security
concerns about them. The practice of interfacing directly with the SecOps is also
fast becoming standard because the endpoint telemetry collected from EDR is a
beneficial source of information for security alert triage and incident
investigation.

Threat Hunting; Content Engineering


Threat hunting is often thought of as a function of the Security Operations team. However, because it is separate from the identify, investigate, and mitigate functions, it is distinct from analyst activities and is included as an interface. Content engineering is the function that builds alerting profiles that identify the alerts that will be forwarded for investigation.

Threat Hunting
Hunting allows you to dig into the data to find situations that the machines and
automation may have missed. Threat hunting can be structured or unstructured.
Structured hunts begin with a single piece of intelligence. Then a hypothesis is
formed, and then the hunt to find the threat in the network begins. Formalized
structured hunts tend to be more useful to an organization than unstructured
efforts.
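As a minimal illustration of a structured hunt, the Python sketch below starts from a single piece of intelligence (a suspicious IP address) and tests the hypothesis that the address has authenticated into the environment. The log records, field names, and indicator value are hypothetical and stand in for whatever log store the hunting team actually queries.

# Minimal structured-hunt sketch: start from one indicator (a suspicious IP)
# and test the hypothesis "this IP has authenticated into our environment."
# The log records and field names here are hypothetical examples.

suspicious_ip = "203.0.113.45"   # indicator taken from threat intelligence

auth_logs = [
    {"timestamp": "2024-05-01T08:12:03", "user": "jdoe", "src_ip": "198.51.100.7", "result": "success"},
    {"timestamp": "2024-05-01T08:15:44", "user": "asmith", "src_ip": "203.0.113.45", "result": "failure"},
    {"timestamp": "2024-05-01T08:16:02", "user": "asmith", "src_ip": "203.0.113.45", "result": "success"},
]

# Hunt: any authentication attempts from the suspicious IP?
hits = [record for record in auth_logs if record["src_ip"] == suspicious_ip]

for hit in hits:
    print(f"{hit['timestamp']} {hit['user']} login {hit['result']} from {hit['src_ip']}")

# A successful login from the indicator would escalate the hunt into an incident.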

Content Engineering
The content engineer and the Security Operations team need to be tightly interfaced
and feedback needs to continuously flow. An interface agreement between the teams
needs to be created to identify how often content updates will be made, how they
will be vetted, and the feedback process. It should identify how the Security
Operations team and threat hunting team make requests for new alerts or
modifications to existing alerts. Properly configured alerts will allow the
Security Operations team to focus on important alerts that require further
investigation.

Security Automation
Automation helps ensure consistency through machine-driven responses to security
issues. A security automation function will own and maintain these automation
tools.

Security automation must be integrated with the Security Operations team to maintain the automation playbooks. The security automation team is also responsible for implementing new automation technology and playbooks in response to new workflows and processes defined by the Security Operations team. The requirements and eventual vetting of the solution should be the responsibility of the Security Operations team. When security automation is vetted, consider the time savings,
accuracy, and usefulness of the automation. Always consider the return on
investment and the ongoing cost of maintenance and support before investing in
automation.
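The sketch below illustrates the kind of machine-driven response described above, assuming a hypothetical alert format, a placeholder block_ip() action, and a preapproved-scenario list; a real automation playbook would call the organization's firewall or SOAR integrations and record the outcome for quality review.

# Minimal automation sketch: apply a preapproved mitigation to matching alerts.
# The alert fields and the block_ip() action are placeholders for illustration.

PREAPPROVED = {"known_malicious_ip"}   # scenarios approved for automatic action

def block_ip(ip: str) -> None:
    # Placeholder: a real playbook would call a firewall or SOAR integration here.
    print(f"[automation] blocking {ip} at the perimeter")

def handle_alert(alert: dict) -> str:
    if alert["scenario"] in PREAPPROVED and alert["severity"] in {"high", "critical"}:
        block_ip(alert["source_ip"])
        return "auto-mitigated"
    return "queued for analyst review"

print(handle_alert({"scenario": "known_malicious_ip", "severity": "high", "source_ip": "203.0.113.9"}))
print(handle_alert({"scenario": "phishing_report", "severity": "medium", "source_ip": "198.51.100.20"}))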

Forensics & Telemetry


Forensics and telemetry provide the data needed to perform the different types of
investigation from severity triage to detailed analysis and hunting.

Click the tabs to learn about telemetry and forensics.

Telemetry

Forensics

Types of Collected Data


Every security team must use both telemetry and forensics. Telemetry from network
and endpoint activity and from cloud configurations will provide readily available
information necessary to triage and investigate the majority of alerts and
incidents. Forensic data supplements telemetry and provides the information needed
to conclude the small number of high-priority or difficult incident investigations
that often lead to breach identification. Should a breach be validated, all data
and results will be required by government and regulatory bodies.

The following are the details about the types of data that are collected.

Alert

Event

Log

Telemetry

Forensic (Raw)
Threat Intelligence Team; Red & Purple Teams
The threat intelligence function identifies potential risks to the organization that have not yet been observed in the network. Red and purple teams provide penetration testing to simulate threats to the organization and provide feedback for improvements to the Security Operations organization.

Threat Intelligence Team

The threat intelligence team uses real-time information feeds from human and
automated sources about the background, details, specifics, and consequences of
present and future cyber risks, threats, vulnerabilities, and attacks. They are
responsible for validating threats and then working with the Security Operations
team to provide IoCs for the analysts and to update controls. The Threat
Intelligence Team delivers threat landscape reports at agreed-upon intervals to
security teams that are responsible for updating the security stack based on their
findings.

Red and Purple Teams

The red team simulates advanced persistent threats (APTs) and will attempt to hide
and slow-play their attacks to avoid detection by SecOps analysts. Purple teams
work with both the red and SecOps teams to help improve operations. They provide
information to the red team about gaps in an analyst’s focus areas and guide the
SecOps team toward approaches to identify red team efforts. Red and purple team exercises should have an allotted time limit, and the results should be given as feedback to the SecOps team to improve capabilities, add processes and procedures, and add controls before an actual APT gains a foothold in the network.

Vulnerability Management Team


The vulnerability management team is responsible for identifying and escalating vulnerabilities in an organization's assets, including hardware and software. The team uses vulnerability scanning technology and other tools to discover vulnerabilities.

The SecOps and vulnerability management teams need an interface to define the
visibility and access required by the Security Operations team and to update each
other about new observations such as possible malware or newly announced
vulnerabilities. After a new vulnerability is announced, the vulnerability
management team will work with the Security Operations team to implement controls
to prevent attacks while the patching process is executed. The Security Operations
team needs to stay updated about these new controls so that it can properly address
any alerts that reach the SecOps.

Let's Help Erik!


Erik needs to make operational changes to cloud technology such as SaaS, PaaS, or
IaaS.

Which team can Erik turn to for assistance for operational changes to cloud
technology?

Help Desk Team

DevOps Team

Operational Technology Team

Information Technology Operations Team

What is the term for activity data gathered by Erik and the SecOps team electronically and in real time from a given source?

Telemetry

Log

Forensic (raw)

Alert

Erik's SecOps team is divided into groups with different functions. Which three
teams are responsible for the development, implementation, and maintenance of
security policies?
Endpoint Security, Network Security, and Cloud Security

Enterprise Security, Endpoint Security, and Cloud Security

HelpDesk Security, Operational Security, and Information Technology Security

Telemetry Security, Forensics Security, and Threat Intelligence Security


Security Operations Fundamentals

Visibility
Pillar

The Visibility pillar enables the SecOps team to use tools and technology to capture network traffic, limit access to certain URLs, determine which applications are being used by end users, and detect and prevent the accidental or malicious release of proprietary or sensitive information.

Before Erik can provide a detailed analysis of the threat, he will need to gather
all of the necessary information to make a well-informed decision. Network
visibility is needed for Erik to gather information about the network’s status, the
traffic passing through the network, and the conditions on which traffic is allowed
to pass through. Without network visibility, Erik may miss important data, and a real threat could be treated as a false positive or missed altogether. The better the visibility Erik has into every aspect of the company’s network, the better informed the decisions he and the SecOps team can make.

Elements in the Visibility Pillar


The concept of visibility in Security Operations refers to intelligence and
awareness. How can you make decisions if you are not aware of what is taking place
in your network? How can you take actions if you don't have knowledge or
intelligence to act on?

In Security Operations, network visibility is needed for information about the network’s status, about the traffic passing through it, and about the conditions under which traffic is allowed to pass. Visibility enables us to see slightly ahead so that we can anticipate, prepare for, and react to changes.

The following elements are in the Visibility pillar.

Network Traffic Capture; Endpoint Data Capture


Network traffic capture is the interception and logging of traffic traversing your
network appliances. Endpoint data capture is the collection of information
generated on or visible to endpoint devices.

Click each tab to learn about the two elements that capture data.

Network Traffic Capture


Endpoint Data Capture
Network traffic can be captured by firewalls, IDS/IPS, proxies, routers, switches,
and standalone traffic capture technologies. Logging your network traffic provides
the Security Operations organization with the visibility to view traffic for the
purpose of doing detailed analysis and advanced investigations. Analysts should
have access to raw traffic logs when specific traffic is associated with an alert
or when a staff member makes a query.
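For illustration only, the sketch below uses the third-party Scapy library (an assumption; it must be installed separately, and packet capture generally requires administrative privileges) to log a summary of a handful of packets. Production capture is normally performed by the appliances listed above, but the underlying idea is the same: intercept traffic and record it for later analysis.

# Minimal traffic-capture sketch using Scapy (pip install scapy).
# Capturing packets usually requires root/administrator privileges.

from scapy.all import sniff

def log_packet(packet) -> None:
    # In a real deployment the summary would be written to a log store, not printed.
    print(packet.summary())

# Capture 10 packets matching a BPF filter (here, TCP traffic on port 443).
sniff(filter="tcp port 443", prn=log_packet, count=10)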

Cloud Computing
Cloud computing delivers services or applications, on-demand, to achieve increased
scalability, transparency, security, monitoring, and management. In cloud
computing, services are delivered using either a private, public, or hybrid cloud.

The use of cloud computing requires a cybersecurity policy enforcement point to apply enterprise security policies for cloud-based resources. The types of security policy enforcement can include single sign-on, authentication and authorization, device profiling, and step-up authentication challenges.

Log collection will be most heavily used by the SecOps. Log collection provides
both in-depth forensic data and correlated event data to the SecOps to ensure that
security analysts can analyze incidents without becoming overwhelmed with noise.
The visibility required from these logs should be defined based on what the SecOps
team requires for proper investigation and on the access level that analysts will
have. The SecOps also needs to understand which types of alerts will be generated
by the cloud security capabilities. Those alerts should be worked into the incident
response plan.

Application Monitoring; SSL Decryption; URL Filtering


Application monitoring provides the ability to determine and log the specific
application used in a session. SSL decryption technology provides visibility into
HTTPS traffic, which is then logged in a readable format. URL filtering gives
security organizations the control to track and restrict access to specific URLs
and URL categories.

Click the arrow to learn more about each element.

Application Monitoring

By monitoring applications, the SecOps team can gain additional context about the specific applications that were used when an event was triggered. Application monitoring goes beyond port identification and recognizes the application used, which can lend credence to proving that an IoC was enacted or that the triggered event was a false positive.

Data Loss Prevention


Data loss prevention (DLP) is a cybersecurity control to detect and prevent the
accidental or malicious release of proprietary or sensitive information.

The controls for DLP are often defined by GRC and managed by the network, endpoint, and cloud security teams. A DLP system helps prevent data exfiltration and notifies the SecOps of attempts to send proprietary or sensitive information. The SecOps then uses these notifications to look for a potential incident or APT in the network.
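A greatly simplified sketch of the pattern-matching idea behind DLP follows: a regular expression finds payment-card-like numbers in outbound text, and a Luhn checksum trims false positives. The sample message is invented, and real DLP systems combine many detectors, exact-data matching, and enforcement actions.

# Toy DLP detector: flag outbound text containing likely payment card numbers.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, digit in enumerate(reversed(digits)):
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        checksum += digit
    return checksum % 10 == 0

def scan_outbound(text: str) -> list[str]:
    # Return candidate card numbers that also pass the Luhn check.
    return [m.group() for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group())]

message = "Please charge card 4111 1111 1111 1111 and confirm by email."
print(scan_outbound(message))   # a real DLP control would block or alert on this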

Threat Intelligence Platform; Vulnerability Management Tools; Analysis Tools


Enhancing cybersecurity protection requires a multifaceted approach. By integrating
threat intelligence platforms, vulnerability management tools, and advanced
analysis techniques, Security Operations teams can effectively address and mitigate
risks.

Threat Intelligence Platform

Vulnerability Management Tools

Analysis Tools
Asset Management; Knowledge Management; Case Management
Achieving optimal Security Operations efficiency hinges on the integration of asset
management, knowledge management, and case management. These three elements work in
harmony to streamline SecOps processes and enhance collaboration, bolstering
overall security efforts.

Click each tab for more information about each.

Asset Management

Knowledge Management

Case Management

Let's Help Erik!


The CEO of Pumpice has asked Erik and the team to send a status report to the
entire organization regarding current security incidents and their statuses.
Luckily, the team has this information ready to communicate to the organization.
What tool or technology can Erik and the SecOps team use to provide visibility into
HTTPS traffic to find IOCs or high-fidelity indicators?

Application Monitoring

SSL Decryption

URL Filtering

Data Loss Prevention

What tool or technology can Erik and the SecOps team use to detect and prevent
accidental or malicious release of proprietary or sensitive information?

Vulnerability management

URL Filtering

SSL Decryption

Data Loss Prevention (DLP)

What management method did the SecOps team utilize to collect information on
security incidents and their statuses?

Case management

Knowledge management

Asset management

Threat management


Security Operations Fundamentals

Technology
Pillar

The Technology pillar includes the tools and technology that increase your capabilities to prevent, or greatly minimize, attempts to infiltrate your network. In the context of IT Security Operations, technology increases your capabilities to securely handle, transport, present, and process information beyond what you can do manually. By using technology, you amplify and extend your ability to work with information in a secure manner.

The threat that Erik detected at the beginning of our scenario has been mitigated.
Erik now needs to work with SecOps team members and other teams to determine if the
current network technology can be used to automate a process or response to
automatically remediate this issue, or similar issues that may arise.

Technologies in SecOps
Click the video to hear Rishi explain how technology is shaping SecOps.


Elements in the Technology Pillar


Technology can be broadly defined as anything that can extend and deepen the
effectiveness of our skills and enable us to perform tasks more quickly and
efficiently. In the context of Security Operations, information technology (IT) can
help us increase our capabilities to securely handle, transport, present, and
process information beyond what we can do manually or by hand.

Technology also helps Security Operations to complete its core mission to identify,
investigate, and mitigate any attack made on a network. The mission to continuously
improve a network always drives the creation of better technologies.

These are the elements that are in the Technology pillar.

Firewall; Intrusion Prevention and Detection System; Malware Sandbox


Strengthening cybersecurity relies on the strategic integration of firewalls, intrusion prevention and detection systems, and malware sandbox tools to enhance visibility, protection, and threat analysis in Security Operations.

Click the tabs to learn about each technology.


Firewall

Intrusion Prevention and Detection Systems

Malware Sandbox

Endpoint Security; Behavioral Analytics


Endpoint security provides control for detecting and protecting servers, PCs,
laptops, phones, and tablets from attacks such as exploits and malware. Behavioral
analytics is a security technology that detects malicious activity by identifying
anomalous behavior indicative of attacks.

Endpoint Security
Endpoint security can include antivirus, EDR, analytics, and device control. The
Security Operations organization should define which data should be captured and
forwarded to the SIEM, or a central security log management function. The Security
Operations organization should understand how to identify data or system exposure
and should understand the mitigation options that are available. An interface
agreement should be created to determine how mitigation strategies will be
executed, how to request changes, and how the changes will be validated.

Behavioral Analytics
Behavioral analytics starts by inspecting endpoint, network, or user data to
automatically classify user and device types and then develops a baseline of
expected behavior. Behavioral analytics compares your current behavior to past
behavior and peer behavior to identify anomalous activity. Machine learning models
can improve accuracy by tailoring detection thresholds to each organization’s
environment.

Behavioral analytics can uncover malware, ransomware, exploits, lateral movement, exfiltration, insider threats, and risky user behavior. The Security Operations team should be aware of which types of events they will encounter and incorporate resulting alerts into the incident response process.
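The toy sketch below shows the baselining idea in its simplest form: build a per-user baseline from historical activity counts and flag current activity that deviates sharply from it. The sample data and the three-standard-deviation threshold are illustrative assumptions; commercial analytics use far richer features, peer-group comparisons, and machine learning models.

# Toy behavioral-analytics sketch: flag activity far outside a user's baseline.
from statistics import mean, stdev

# Historical daily file-download counts per user (illustrative data).
baseline = {
    "jdoe":   [12, 9, 15, 11, 13, 10, 14],
    "asmith": [3, 4, 2, 5, 3, 4, 3],
}

# Today's observed counts.
today = {"jdoe": 14, "asmith": 67}

for user, history in baseline.items():
    mu, sigma = mean(history), stdev(history)
    observed = today[user]
    z = (observed - mu) / sigma if sigma else 0.0
    if z > 3:   # flag activity more than 3 standard deviations above the baseline
        print(f"ANOMALY: {user} downloaded {observed} files today (baseline ~{mu:.1f})")
    else:
        print(f"normal: {user} downloaded {observed} files today")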

Email Security; Web Application Firewall


Email security detects and prevents malicious email content from infecting the
recipients and provides protection from phishing scams. A web application firewall
(WAF) is a security control that protects HTTP applications from well-known HTTP
exploits.

Click the tabs to learn about each technology.

Email Security
Web Application Firewall
Email security functionality supports properties for confidentiality, digital
signatures, sender authentication, and integrity control using cryptographic
controls. Information from email security systems should be provided to the SecOps
so it can investigate credential loss issues. A new feature of email security is Domain-based Message Authentication, Reporting, and Conformance (DMARC), which is an email authentication, policy, and reporting protocol that allows email senders and receivers to work together to better secure emails and users.
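Because a domain's DMARC policy is published as a DNS TXT record under its _dmarc subdomain, an analyst can check what policy a sending domain advertises with a simple lookup. The sketch below assumes the third-party dnspython package and uses example.com purely as a placeholder domain.

# Minimal DMARC lookup sketch (pip install dnspython).
import dns.resolver

def get_dmarc_policy(domain):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None   # no DMARC record published
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            return record   # e.g. "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
    return None

print(get_dmarc_policy("example.com"))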

Honey Pots & Deception


Honey pots and other deception techniques are used to detect, deflect, and
counteract malicious activities against an organization.

Provide a Lure

A honey pot provides a lure, false lead, and shadow network to draw an attacker into a controlled environment so that the attacker's actions can be studied.

Understand Techniques

Honey pots and deception can be used to help the SecOps understand the techniques being used to exploit their defenses and thus can lead to new use cases for alert generation and updated controls.

Decipher the Threat Landscape

A honey pot can also be used to decipher the threat landscape and the types of
campaigns that are targeting the organization and vulnerabilities in IT operations.
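The sketch below is a deliberately minimal low-interaction honey pot: it listens on an otherwise unused port and logs whoever connects. The port number and log format are arbitrary choices for illustration; real deception platforms emulate full services and forward their observations to the SecOps team.

# Minimal low-interaction honey pot: log every connection attempt to an unused port.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222   # arbitrary unused port chosen for illustration

def run_honeypot() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        while True:
            conn, (addr, port) = server.accept()
            with conn:
                stamp = datetime.now(timezone.utc).isoformat()
                # In practice this event would be forwarded to the SIEM for alerting.
                print(f"{stamp} connection attempt from {addr}:{port}")

if __name__ == "__main__":
    run_honeypot()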

Virtual Private Networks; Mobile Device Management; Network Access Control; Identity & Access Management

Safeguarding corporate networks involves utilizing virtual private networks, mobile device management, network access control, and identity and access management tools to enhance security and minimize threats from remote users and unauthorized devices.

Click the tabs for more information about each tool.

Virtual Private Networks


Mobile Device Management
Network Access Control
Identity and Access Management
Virtual private networks (VPNs) allow remote users to securely participate on the
corporate network from an external network location. Traffic across the VPNs
requires special security policies because the traffic is seen as part of the
trusted network and connections may not be subject to the same firewall and IDS/IPS
controls used for external traffic. The SecOps requires visibility into VPN traffic
to monitor for remote users and application anomalies.

Security Information & Event Management


A security information and event management (SIEM) system is used as a central repository to ingest logs from all corporate-owned systems. SIEMs collect and process audit trails, activity logs, security alarms, telemetry, metadata, and other historical or observational data from a variety of different applications, systems, and networks in an enterprise.

Before a SIEM can operate properly, connectors and interfaces are required to ensure translated flow from the system of interest to the SIEM data lake. The
Security Operations organization should define how ownership of an event will be
established and identify where an analyst will go to receive alerts. Sometimes an
analyst will use the SIEM, but in other cases an analyst will use a Security
Orchestration, Automation, and Response (SOAR) platform or ticketing system.
The selected SIEM approach should address any governance, risk, and compliance
requirements for the separation of data, privacy, and retention times. You can
limit data redundancy between the SIEM and feeder systems to help control costs and
use offline storage for long-term compliance needs.
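Connectors typically normalize raw log lines into structured fields before they reach the SIEM data lake. The sketch below shows that idea for a made-up firewall log format; the regular expression and field names are assumptions for illustration, not any particular vendor's schema.

# Minimal connector sketch: normalize a raw log line into a structured event.
import re

# Hypothetical firewall log format used only for illustration.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+) action=(?P<action>\w+) "
    r"src=(?P<src>\S+) dst=(?P<dst>\S+) dport=(?P<dport>\d+)"
)

def normalize(line: str):
    match = LOG_PATTERN.match(line)
    if not match:
        return None   # unparsable lines would be routed to a dead-letter queue
    event = match.groupdict()
    event["dport"] = int(event["dport"])
    return event

raw = "2024-05-01T10:22:31Z action=deny src=203.0.113.9 dst=10.1.2.3 dport=3389"
print(normalize(raw))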

Security Orchestration, Automation, and Response


According to Gartner, SOAR refers to technologies that enable organizations to collect inputs monitored by the Security Operations team. For example, alerts from the SIEM system and other security technologies, where incident analysis and triage can be performed by leveraging a combination of human and machine power, help define, prioritize, and drive standardized incident response activities.

Click the arrows for more information about SOAR features.

SOAR Systems Allow Incident Response

SOAR systems allow for accelerated incident response through the execution of
standardized and automated playbooks that work upon inputs from security technology
and other data flows.

SOAR Tools Ingest Alerts

SOAR tools ingest aggregated alerts from detection sources such as SIEMs, network
security tools, and mailboxes before executing automated, process-driven playbooks
to respond to these alerts. The playbooks coordinate across technologies, security
teams, and external users for centralized data visibility and action.

SOAR Provides Consistency

SOAR provides consistency by standardizing processes, which improves operational confidence in SecOps capabilities.

Let's Help Erik!


In the past month, Erik and the SecOps team have been receiving more alerts than
usual.

Erik is concerned that some of these alerts may be critical and the team will need
help mitigating all of them. What should Erik do?

Deploy more SIEMs to collect and process the data before having a SecOps analyst
interpret the data and take appropriate action

Deploy additional endpoint security to protect servers, PCs, laptops, and tablets
so that alerts that are missed can be caught before exfiltrating data from the end
user

Deploy SOAR technologies so he can accelerate incident response and automatically execute process-driven playbooks to mitigate critical alerts

Deploy more firewalls to protect the network while SecOps analysts are interpreting
data and taking appropriate action
What tool or technology can provide Erik and his SecOps team control for the
provisioning, maintenance, and operation of user identities?

Identity and access management

Mobile device management

Network access controls

Virtual private networks

What security technology can Erik and the SecOps team use to identify anomalous
behavior indicative of attacks?

Endpoint security analytics

Behavioral analytics

Malware analytics

Honey pot analytics

Course Summary
Now that you've completed this course, you should be able to:

Describe the main functions of security operations


Security Operations Fundamentals

Endpoint Detection
and Response

This lesson describes how Cortex XDR protects endpoints and prevents attacks across the attack lifecycle by combining endpoint protection and endpoint detection and response (EDR) in a single agent.

Cortex XDR: Endpoint Protection


Adversary strategies have evolved from simple malware distribution to a broad set
of automated, targeted, and sophisticated attacks that can bypass traditional
endpoint protection.

This evolution has forced organizations to deploy multiple products from different
vendors to protect against, detect, and respond to these threats. Cortex XDR brings
powerful endpoint protection together with endpoint detection and response (EDR) in
a single agent. You can replace all your traditional antivirus agents with one
lightweight agent that shields your endpoints from the most advanced adversaries by
understanding and blocking all elements of attacks.

Primary Attack Methods


Although attacks have become more sophisticated and complex, they still use basic
building blocks to compromise endpoints.

The primary attack methods continue to be the exploitation of known and unknown application vulnerabilities and the deployment of malicious files, including ransomware. These attack methods can be used individually or in various combinations, but they are fundamentally different in nature:

Exploits

Malware

Ransomware

Prevention of Attack Lifecycle


Due to the fundamental differences between malware and exploits, effective
prevention must protect against both.

The Cortex XDR agent combines multiple methods of prevention at critical phases
within the attack lifecycle to halt the execution of malicious programs and stop
the exploitation of legitimate applications, regardless of an operating system, the
endpoint’s online or offline status, or whether the endpoint is connected to an
organization’s network or roaming.

Course Summary
Now that you've completed this course, you should be able to:

Distinguish between exploits and malware

Describe technique-based exploit prevention and behavioral threat protection


Security Operations Fundamentals

SOAR
Technology

This lesson describes SOAR as a technology and how it automates and orchestrates all the elements of security operations.

SOAR Components
SOAR, or Security Orchestration, Automation, and Response, comprises three components: orchestration, automation, and response.

Orchestration
The first component of SOAR is Orchestration, which involves controlling and activating the security product stack from a central location. SOAR products do this through playbooks, which are task-based workflows that coordinate across people, process, and technology.

Automation
The second component of SOAR is Automation, which is a logical subset of orchestration. Within SOAR, automation involves finding repeatable tasks and executing them at machine speed. SOAR products have automation scripts and extensible product integrations to accomplish this.

Response
The final component, Response, involves maintaining incident oversight as it goes through the lifecycle. Within SOAR products, this includes case management, collaboration during investigation, and analysis and reporting after incident closure.

The Importance of SOAR


SOAR is intended to automate the orchestration of the interactions among all these elements and to coordinate those interactions. SOAR is critical to the future of security operations.

Video: Business Value of SOAR


This video explains the importance of SOAR and how SOAR is critical to the future
of security operations.


SOAR Systems
SOAR systems allow for accelerated incident response through the execution of
standardized and automated playbooks that work upon inputs from security technology
and other data flows. Almost every organization that’s serious about security has a
Security Information and Event Management (SIEM) tool deployed in its environment.

SOAR

SOAR tools ingest aggregated alerts from detection sources (such as SIEMs, network
security tools, and mailboxes) before executing automatable, process-driven
playbooks to enrich and respond to these alerts. The playbooks coordinate across
technologies, security teams, and external users for centralized data visibility
and action. They help accelerate incident response times and increase analyst
productivity. Because playbooks standardize processes and create better consistency, confidence in the operation of security operations capabilities improves.

SIEM

SIEMs collect disparate pieces of data and aggregate them into alerts. SIEM tools
and security orchestration tools have some feature similarities on the surface such
as automation of actions, product integrations, and correlation of data. SIEM tools
monitor various sources for machine data, correlate and aggregate them for context,
and provide real-time detection and monitoring of alerts generated by applications
and network hardware.

Parts of Security Orchestration


Security orchestration is a method of connecting disparate security technologies
through standardized and automatable workflows that enables security teams to
effectively carry out incident response and security operations. There are three
parts to security orchestration: security technologies, workflows/playbooks, and
security teams.

Security Technologies
SOAR tools integrate with the other security and non-security tools that an
organization uses to provide teams with a central console for coordinating and
activating all these tools. These integrations enable inter-product conversations,
data transfer, and remote execution of commands.

Product integrations within security orchestration tools can be either unidirectional or bidirectional. A unidirectional integration only allows for transfer of data from the integrated security product to the security orchestration tool. A bidirectional integration allows for two-way transfer of data.

For example, a security orchestration tool that has a bidirectional integration with an endpoint tool can ping the endpoint tool for device details, asset data, infected endpoints, and similar information. The security orchestration tool can also perform create, read, update, and delete (CRUD) actions on the endpoint tool, such as quarantining an endpoint or updating an indicator blocklist.
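The sketch below suggests what such a bidirectional integration might look like from the orchestration tool's side, using the third-party requests library against a purely hypothetical endpoint-tool REST API; the base URL, paths, token, and response fields are placeholders, not a real product's interface.

# Hypothetical bidirectional integration sketch (pip install requests).
# The API base URL, endpoints, and token below are illustrative placeholders.
import requests

BASE_URL = "https://edr.example.internal/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

def get_device(hostname: str) -> dict:
    # Read: pull device details from the endpoint tool.
    response = requests.get(f"{BASE_URL}/devices/{hostname}", headers=HEADERS, timeout=10)
    response.raise_for_status()
    return response.json()

def quarantine_device(hostname: str) -> None:
    # Update: instruct the endpoint tool to isolate the device from the network.
    response = requests.post(
        f"{BASE_URL}/devices/{hostname}/quarantine",
        headers=HEADERS,
        json={"reason": "SOAR playbook containment step"},
        timeout=10,
    )
    response.raise_for_status()

# Example playbook step: quarantine any device reported as infected.
device = get_device("laptop-042")
if device.get("infected"):
    quarantine_device("laptop-042")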

Workflows/Playbooks
Playbooks (runbooks) are task-based graphical workflows that help visualize
processes across security products. These playbooks can be automated, manual, or
both.

Click the tabs to learn about the building blocks that compose playbooks.

Playbook Trigger
Automated Playbook Task
Manual Playbook Tasks
Conditional Tasks
A playbook that is meant to automatically execute within a security orchestration
tool needs a trigger point. This trigger point can be any condition that, when met,
results in the start of the playbook. For example, whenever a phishing email is
ingested from a mailbox into the security orchestration tool, a ‘phishing response’
playbook can be triggered and begin its execution.
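As a rough sketch of these building blocks, the snippet below wires a trigger condition to automated, conditional, and manual tasks. The alert fields, scoring logic, and task wording are invented for illustration and stand in for what a SOAR platform's graphical playbook editor would express.

# Toy playbook sketch: trigger -> automated task -> conditional task -> manual task.
# Alert fields and the scoring logic are illustrative placeholders.

def trigger(alert: dict) -> bool:
    # Trigger point: run the playbook only for phishing alerts ingested from the mailbox.
    return alert.get("source") == "mailbox" and alert.get("type") == "phishing"

def enrich(alert: dict) -> dict:
    # Automated task: in a real playbook this would query threat intelligence tools.
    alert["risk_score"] = 85 if "invoice" in alert["subject"].lower() else 20
    return alert

def run_playbook(alert: dict) -> None:
    if not trigger(alert):
        return
    alert = enrich(alert)
    if alert["risk_score"] >= 70:          # conditional task: branch on the verdict
        print("Automated task: quarantine the email for all recipients")
        print("Manual task: analyst confirms scope and notifies affected users")
    else:
        print("Automated task: close the incident as benign")

run_playbook({"source": "mailbox", "type": "phishing", "subject": "Overdue invoice attached"})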

Common SOAR Playbooks


The following information describes common SOAR playbooks.

Phishing Enrichment and Response
SOAR phishing playbooks ingest alerts from email inboxes and coordinate actions across threat intelligence tools, sandboxes, EDR solutions, among others, for repeatable and accurate response.

Threat Hunting
SOAR threat hunting playbooks can be scheduled to run at pre-determined intervals. They rapidly scan for threats in the environment after ingesting external threat feeds or following up on existing incidents.

IoC Enrichment
SOAR playbooks can automate enrichment of indicators by querying different threat intelligence tools for context and presenting the results to analysts, thus saving time that can be used toward proactive investigation.

Incident Severity Assignment
SOAR playbooks can automatically assign severity to incidents by checking parameters relevant to the organization. Because playbooks reconcile threat scores from other products, check indicator scores, and verify the criticality of affected endpoints and users, they ensure that analysts see the incidents that need to be seen.

Cloud Security Orchestration
SOAR playbooks can coordinate response across cloud and on-premises environments. For instance, a playbook can execute after ingesting a cloud security alert and respond by both blocking malicious IoCs on cloud appliances and on firewalls that are on-premises.

Security Teams
SOAR playbooks enable security teams to effectively carry out incident response and
security operations.

The following describes how playbooks can help security teams.

Manual Tasks

When an action is too unique, nuanced, or infrequent to be automated, security orchestration playbooks can have manual tasks that act as directives for the SecOps analyst handling the respective incident.

Task Approval

Even if some actions are prime candidates for automation, they might be too
sensitive to carry out without having a human verify their need and relevance. In
such cases, automated actions can have built-in task approvals. These actions will
wait for the relevant SecOps analyst’s approval before beginning execution.

End-User Engagement

A SOAR tool that has rich integrations with email tools can be used to engage
SecOps analysts and end users within the organization and thus improve overall
process flow.

Security Gaps and Risks


All security challenges cause concern because they present financial and business
risks. Some breaches may not create serious financial risks, but others can be
devastating. Breaches also can cause serious damage to corporate reputation,
especially if sensitive data or internal content is posted online.

Factors That Contribute To Financial Risk


A Ponemon study identified that the average cost of a breach can be in the millions. If regulation fines such as those related to the General Data Protection Regulation (GDPR)
in the European Union also are included, the problem for organizations that are
hacked is exacerbated. The following are factors that can contribute to financial
risk:
How Security Orchestration Fills Gaps
There are critical gaps that still exist in security. Security suffers when there
is a lot of data but little follow-up, a lack of product interconnectivity, and a
largely siloed workforce. Security orchestration is well placed to fill these gaps
by leveraging multi-source data ingestion and correlation, an extensible product
integration network, and playbooks and collaboration features that democratize a
security team’s knowledge.

Click the tabs to learn about the benefits of security orchestration.

Accelerates Incident Response

Standardizes and Scales Processes

Unifies Security Infrastructures

Increases Analyst Productivity

Leverages Existing Investments

Improves Overall Security Posture

Course Summary
Now that you've completed this course, you should be able to:

Identify the main components of a SOAR solution

Describe the three parts of security orchestration

Identify security gaps and risks
