CySA+ Module 1.1
Presentation slides for the CompTIA CySA+ certification, Module 1.1.

01

Systems and Network Architecture
In this chapter you will learn:

■ Common network architectures and their security implications
■ Operating systems
■ Technologies and policies used to identify, authenticate, and authorize users
■ Public Key Infrastructure
■ Protecting sensitive data

1.1 The Importance of Logging
• Auditing is the process of examining the actions of an entity, such as an individual
user, with the goal of conclusively tying the actions taken in a system, or on a
resource, to that entity.
• Auditing directly supports accountability and nonrepudiation, in that individuals
and entities can be held accountable for their actions, and they cannot dispute that
they took those actions if auditing is properly configured.
• Auditing would not be possible without logging, the lower-level activity that makes auditing feasible.
• Logging is the process of recording those actions, as well as specific data related to them.
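As a small illustration of the relationship between the two, the sketch below shows one way an application might write audit-style records using Python's standard logging module; the filename, message format, and field names are illustrative assumptions, not a prescribed scheme.

```python
import logging

# Minimal audit-style logger: every record carries a timestamp,
# a severity, and a message tying an action to a specific user.
logging.basicConfig(
    filename="audit.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_action(username: str, action: str, resource: str) -> None:
    """Write one auditable event so the action can later be tied to the user."""
    logging.info("user=%s action=%s resource=%s", username, action, resource)

record_action("bcarter", "file_modification", "/srv/reports/q3.xlsx")
```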
Logging Levels
• The logging level refers to how much detail, or the extent of information, is recorded in a log entry.
• Typical log events include information such as the following (a structured example follows the list):
 Source IP address or hostname
 Destination IP address or hostname
 Networking port, protocol, and service
 Username
 Domain name
 Nature of the event (account creation, deletion, file modification, and so on)
 Event ID
 Timestamp of the event indicating when it took place
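To make the fields above concrete, here is a hedged sketch of how they might appear in a single structured (JSON) log event; the values, and the use of Windows event ID 4720 for account creation, are purely illustrative.

```python
import json
from datetime import datetime, timezone

# Illustrative structured log event containing the fields listed above.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "event_id": 4720,              # e.g., the Windows ID for account creation
    "source_ip": "10.0.4.17",
    "destination_ip": "10.0.1.5",
    "port": 445,
    "protocol": "TCP",
    "service": "SMB",
    "username": "jdoe",
    "domain": "CORP",
    "event": "account_creation",
}

print(json.dumps(event, indent=2))
```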
Log Ingestion
• Log ingestion is the means and processes by which logs are collected, stored, analyzed, and managed.
• SIEM and SOAR technologies have emerged to reduce analyst workload.
• The default location for logs in the Linux operating system is the /var/log directory.
• In the Windows environment, you can use the Event Viewer to view the event logs, which are stored as .evtx files in the %SystemRoot%\System32\Winevt\Logs folder by default. Other operating systems store their log files in their own default locations.
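As a quick orientation exercise on a Linux host, the minimal sketch below lists the files available for ingestion under /var/log; it assumes you have read permission on that directory (some files require root).

```python
import os

# List the files under /var/log, the default Linux log location noted above,
# along with their sizes, as a first look at what could be ingested.
log_dir = "/var/log"
for name in sorted(os.listdir(log_dir)):
    path = os.path.join(log_dir, name)
    if os.path.isfile(path):
        print(f"{os.path.getsize(path):>10}  {path}")
```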
Time Synchronization

• Accountability directly depends on auditing, and auditing, in turn, directly depends on how well the organization logs events.
• Accurate timestamps for logged events are critical for many security processes,
including troubleshooting, and especially incident response and investigating
suspicious events.
• Most modern networks use network-based time sources, rather than relying on the
individual times generated by the host device. A network-based time source maintains
a correct, consistent time for all devices. This is not only critical to logging but also to
some network services, such as Kerberos authentication.
• Note that network time synchronization uses the Network Time Protocol (NTP) over
UDP port 123.
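The sketch below queries a public NTP server and compares its time with the local clock; it assumes the third-party ntplib package is installed and that outbound UDP port 123 is reachable, and the server name is illustrative.

```python
import ntplib                      # third-party: pip install ntplib
from datetime import datetime, timezone

# Ask an NTP pool server for the current time (UDP port 123) and
# measure how far the local clock has drifted from it.
client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

network_time = datetime.fromtimestamp(response.tx_time, tz=timezone.utc)
local_time = datetime.now(timezone.utc)
drift = abs((local_time - network_time).total_seconds())

print(f"NTP time:   {network_time.isoformat()}")
print(f"Local time: {local_time.isoformat()}")
print(f"Drift:      {drift:.3f} seconds")
```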
1.2 Operating System Concepts
• Although most people are familiar with the four major variations of operating systems,
such as Windows, UNIX, Linux, and macOS, there are countless other variations, including
embedded and real-time operating systems as well as older operating systems, such as
BSD, OS/2 Warp, and so on.
• Operating System Characteristics:
The OS is in charge of managing all the hardware and software resources on the
system
The OS also provides abstraction layers between the end user and the hardware
The operating system also serves as a mediator for applications installed on the
system to interact with both the user and system hardware.
• At its most basic level, the operating system consists of critical core system files and provides the ability to execute applications on the system. It also provides the means for users and applications to interact with system hardware and resources.
Windows Registry

• Every single configuration element for the Windows operating system is contained in its registry. The registry is the central repository database for all configuration settings in Windows, whether they are simple desktop color preferences or networking and security configuration items.
• The registry is the most critical portion of the operating system, other than its core
executables.
• It is one of the first places an analyst goes to look for issues, along with the log files, if
Windows is not functioning properly, or if the analyst suspects the operating system
has been compromised.
• The Windows registry is a hierarchical database that is highly protected from a security perspective. Only specific programs, processes, and users with high-level privileges are allowed to access or modify it directly.
Windows Registry Hives
• The registry is organized into five top-level hives: HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and HKEY_CURRENT_CONFIG.
• Although the registry stores all configuration details for the Windows operating system and installed applications, configuration changes routinely are not made to the registry itself. They are usually made through other configuration utilities that are part of the operating system and its applications. For instance, you would not make changes to group policy directly in the registry; you would simply use the group policy editor, which would update the registry.
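For analysts who need to read (not change) a registry value, here is a minimal sketch using Python's standard winreg module; it only runs on Windows, and the key path and value name are common examples rather than anything this deck prescribes.

```python
import winreg  # standard library, available on Windows only

# Read a well-known value from the HKEY_LOCAL_MACHINE hive.
key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    product_name, value_type = winreg.QueryValueEx(key, "ProductName")
    print(f"Windows edition (from the registry): {product_name}")
```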
Linux Configuration Settings

• Rather than storing configuration settings in a proprietary hierarchical database, as Windows does, Linux stores configuration settings in simple text files.
• Most of these configuration text files are stored in the /etc directory and its related
subdirectories. These text files are not integrated, although you will often see
references to other configuration files in them.
• It’s important to note that configuration files in Linux are well protected and only
certain applications, daemons, and users, such as root, have direct access to them.
• On the exam, you may be asked to answer questions about scenarios that test your understanding of the fundamentals of both Windows and Linux configuration settings. At a minimum, you should understand the most basic configuration concepts, such as the registry and the /etc directory.
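Because Linux configuration files are plain text, reading them takes nothing more than ordinary file handling; the sketch below parses /etc/os-release, a simple key=value file found on most modern distributions.

```python
# Parse /etc/os-release, a plain-text key=value configuration file in /etc.
settings = {}
with open("/etc/os-release") as f:
    for line in f:
        line = line.strip()
        if line and "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            settings[key] = value.strip('"')

print(settings.get("PRETTY_NAME", "unknown distribution"))
```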
System Hardening
• Systems shipped from a vendor typically come with default configuration settings; a common example is the default credentials admin:admin.
• Before a system is put into operation, it should go through a process known as
hardening, which means that the configuration of the system is made more secure
and locked down. This can be done manually or automated using hardening tools or
scripts.
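As a rough illustration of what one small piece of such an automated check might look like, the sketch below probes a handful of common TCP ports on a host to confirm that unnecessary ports really are closed; the target address and port list are illustrative assumptions.

```python
import socket

# Check which of a few common TCP ports accept connections on a host,
# as part of verifying that unnecessary ports have actually been closed.
target = "127.0.0.1"
ports = [21, 22, 23, 80, 135, 139, 443, 445, 3389]

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        result = s.connect_ex((target, port))  # 0 means the port accepted the connection
        state = "open" if result == 0 else "closed/filtered"
        print(f"{target}:{port} {state}")
```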
• Typical actions taken for system hardening include the following:
Updates with the most recent operating system and application patches
Unnecessary services turned off
Unnecessary open network ports closed
Password changes for all accounts on the system to more complex passwords
New accounts created with very restrictive privileges
Installation of antimalware, host-based intrusion detection, and EDR software
File Structure
• The file structure of an operating system dictates how files are stored and accessed on
storage media, such as a hard drive. The file structure is operating system dependent.
• Most modern operating systems organize files into hierarchical logical structures,
resembling upside-down trees, where a top-level node in the tree is usually a directory
and is represented as a folder in the GUI.
• Most often files have individual extensions that are not only somewhat descriptive of
their function but also make it easier for the applications that use those files to access
them. Examples: .docx, .pdf, .jpeg, .mkv.
• Windows uses the NTFS file system, proprietary to Microsoft, and most modern Linux
variants use a file system known as ext4.
• Files typically have unique signatures that are easily identifiable using file verification utilities. This can help a cybersecurity analyst determine whether a file has been altered or tampered with.
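One common form of file verification is comparing a cryptographic hash against a known-good value; the sketch below does this with Python's standard hashlib, and the file path and expected digest are placeholders.

```python
import hashlib

# Compute a SHA-256 digest of a file and compare it against a known-good value.
def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"  # placeholder value
actual = sha256_of("report.pdf")  # placeholder path
print("unchanged" if actual == expected else "file differs from the known-good hash")
```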
System Processes
• Processes are ongoing activities that execute to carry out a multitude of tasks for
operating systems and applications.
• As a cybersecurity analyst, you should be familiar with the basic processes for
operating systems and applications that you work with on a daily basis.
• Windows processes can be viewed using a utility such as Task Manager.
• In Linux and other UNIX-based systems, processes are referred to as being spawned by
Linux services, or daemons. One of the basic tools for viewing processes in Linux is the
ps command.
• You should understand how to view and interact with processes for both Windows and
Linux for the exam, using Task Manager and the ps command, respectively.
[Figure: Viewing processes in Linux using the ps command.]
[Figure: Viewing processes in the Windows Task Manager.]
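If you want to capture a process listing programmatically rather than interactively, the minimal sketch below shells out to the native tool for the current OS (ps on Linux/UNIX, tasklist on Windows) and prints the first few rows.

```python
import platform
import subprocess

# Invoke the platform's own process-listing command and show a sample of its output.
if platform.system() == "Windows":
    command = ["tasklist"]
else:
    command = ["ps", "aux"]

output = subprocess.run(command, capture_output=True, text=True, check=True)
lines = output.stdout.splitlines()
print(lines[0])                 # header row
print(*lines[1:6], sep="\n")    # first few processes
```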
Hardware Architecture
• Hardware architecture refers not only to the logical and physical placement of hardware
in the network architecture design but also the architecture of how the hardware itself
is designed with regard to the CPU, trusted computing base, memory allocation, secure
storage, boot verification processes, and so on.
• Some hardware architectures are better suited from a security perspective than others;
many hardware architectures have security mechanisms built in, such as those that use
Trusted Platform Modules (TPMs) and secure boot mechanisms, for instance.
• The TPM is a secure, tamper-resistant location for storing encryption keys and performing highly trusted cryptographic operations on Windows and Linux devices. On Apple devices, the equivalent is called the Secure Enclave; on Samsung Android devices, it is called Knox.
• Secure boot helps prevent unauthorized operating systems (or any other unauthorized software) from booting.
1.3 Network Architecture
• A network architecture refers to the nodes on a computer network and the manner in
which they are connected to one another.
• There is no universal way to depict this, but it is common to at least draw the various subnetworks and the network devices (such as routers and firewalls) that connect them. It is also helpful to list the individual devices included in each subnet, at least for valuable resources such as servers, switches, and network appliances.
• The most mature organizations draw on their asset management systems to provide a
rich amount of detail as well as helpful visualizations.
• It should not just be an exercise in documenting where things are but in placing them deliberately in certain areas.
Hybrid Network Architecture
• In most cases, network architectures are hybrid constructs incorporating physical, software-defined, virtual, and cloud assets.
On-premises Architecture
• The most traditional network architecture is a physical one. In a physical network
architecture, we describe the manner in which physical devices such as workstations,
servers, firewalls, and routers relate to one another.
• Along the way, we decide what traffic is allowed, from where to where, and develop the
policies that will control those flows. These policies are then implemented in the
devices themselves—for example, as firewall rules or access control lists (ACLs) in
routers.
• Most organizations use a physical network architecture, or perhaps more than one.
Network Segmentation
• Network segmentation is the practice of breaking up networks into smaller
subnetworks.
• Segmentation enables network administrators to implement granular controls over the
manner in which traffic is allowed to flow from one subnetwork to another.
• Some of the goals of network segmentation are to thwart an adversary’s efforts,
improve traffic management, and prevent spillover of sensitive data.
• Network segmentation can be implemented through physical or logical means.
• Physical network segmentation uses network devices such as switches and routers.
• Logical segments are implemented through technologies such as virtual local area
networking (VLAN), software-defined networking, end-to-end encryption, and so on.
• One reason you might want to segment hosts from others on the network is to protect sensitive data.
Zero Trust
• Zero Trust simply means that hosts, applications, users, and other entities are not
trusted by default, but once they are trusted, through strong identification and
authentication policies and processes, that trust must be periodically reverified.
• Zero Trust is a security concept in which organizations assume that attackers have
already breached their network perimeter defenses and, as a result, do not
automatically trust any user or device that is inside the perimeter. Instead, zero trust
networks require all users and devices to be authenticated and authorized before they
are allowed to access resources on the network.
• Never trust, Always Verify.
• Zero Trust can be implemented through Zero trust network access (ZTNA). In a ZTNA
architecture, access to network resources is controlled through the use of software-
defined perimeters that are created around specific resources. These perimeters are
dynamically established and enforced by a central control plane.
Software-Defined Networking (SDN)
• Software-defined networking (SDN) is a network architecture in which software applications are responsible for deciding how best to route data (the control layer) and then for actually moving those packets around (the data layer).
• One of the most powerful aspects of SDN is that it decouples data forwarding functions (the data plane) from decision-making functions (the control plane), allowing for holistic and adaptive control of how data moves around the network.
Secure Access Service Edge (SASE)
• SASE combines the concepts of software-defined wide area networking and zero trust, and the services are delivered through cloud-based deployments.
• SASE is identity based; in other words, access is allowed based on the proper identification and authentication of both users and devices.
• Another key characteristic is that SASE is entirely cloud-based, with both its infrastructure and security mechanisms delivered through the cloud.
• SASE is designed to be globally distributed; secure connections are established close to users wherever they happen to be.
Cloud Service Models
Cloud computing enables organizations to access on-demand network, storage, and compute power, usually from a shared pool of resources.

Software as a Service (SaaS): Google Apps, Dropbox, Salesforce, Office 365, and iCloud are all examples of SaaS.
Platform as a Service (PaaS): AWS Lambda, Microsoft Azure, Google App Engine, Apache Stratos, AWS Elastic Beanstalk, and Heroku.
Infrastructure as a Service (IaaS): DigitalOcean, Linode, Rackspace, AWS, Cisco Metapod, Microsoft Azure, and Google Compute Engine (GCE).
Software as a Service (SaaS)
• SaaS allows users to connect to and use cloud-based apps over the Internet.
• Organizations access applications and functionality directly from a service provider with minimal requirements to develop custom code in-house.
• The vendor provides the service and all of the supporting technologies beneath it.
• Any security problems that arise occur at the data-handling level.
• The most common types of SaaS vulnerabilities exist in one or more of three spaces, including visibility and management.
Platform as a Service (PaaS)
• PaaS provides customers a complete cloud platform for developing, running, and managing applications without the cost, complexity, and inflexibility that often come with building and maintaining that platform on premises.
• PaaS solutions are optimized to provide value focused on software development.
• PaaS is designed to provide organizations with tools that interact directly with what may be the most important company asset: its source code.
• Service providers assume responsibility for managing and maintaining the underlying platform.
Infrastructure as a Service (IaaS)
• IaaS is internet access to "raw" IT infrastructure (physical servers, virtual machines, storage, networking, and firewalls) hosted by a cloud provider. IaaS eliminates the cost and work of owning, managing, and maintaining on-premises infrastructure.
• The organization provides its own application platform and applications.
• Remember that SaaS typically offers only applications, PaaS generally offers a configured host with the operating system only, and IaaS usually offers a base server on which the organization installs its own operating system and applications.
Security as a Service
• SECaaS is a cloud-based model for
service delivery by a specialized
security service provider. SECaaS
providers usually offer services such as
authentication, antivirus, intrusion
detection, and security assessments.
• SECaaS serves as an extension of MSSP
capabilities, providing incident
response, investigation, and recovery.
• Examples include identity and access management, antivirus management, data loss prevention (DLP), continuous monitoring, Firewall as a Service (FWaaS), and vulnerability scanning.
Cloud Deployment Models

Public Cloud: Shared infrastructure that supports all users who want to make use of a computing resource.
Private Cloud: Infrastructure is owned and maintained by a single organization. Public cloud providers can also emulate a private cloud within a public cloud (Virtual Private Cloud).
Hybrid Cloud: An organization makes use of interconnected private and public cloud infrastructure.
Community Cloud: Multiple organizations with a common interest in how data is stored and processed share computing resources.
Cloud Access Security Broker (CASB)
• A CASB sits between each user and each cloud service, monitoring all activity,
enforcing policies, and alerting you when something seems to be wrong.
• Four pillars of CASBs:
Visibility
Threat Protection
Compliance
Data Security
• A CASB mediates access between internal clients and cloud-based services. It is
normally installed and managed jointly by both the client organization and the CSP.
1.4 Infrastructure Concepts
• Infrastructure is more than networks and networking components. It also includes the host devices, their operating systems and applications, management structures, processes, architectures, and so on.
• A cybersecurity analyst should become intimately familiar with their organization’s
infrastructure.
• They also need to be aware of some of the specific technologies their organization
uses, including serverless architecture, virtualization, containerization, and so on, and
how they interact with the infrastructure.
Virtualization
• Virtualization is technology that you can use to create virtual representations of
servers, storage, networks, and other physical machines. Virtual software mimics the
functions of physical hardware to run multiple virtual machines simultaneously on a
single physical machine.
• Virtualization technologies have vastly reduced the hardware needed to provide a wide
array of service and network functions.
• For the average user, virtualization has proven to be a low-cost way to gain exposure to
new software and training.
Hypervisors
• The use of hypervisors is the most common method of achieving virtualization.
• The hypervisor manages the physical hardware and performs the functions necessary to share those resources across multiple virtual instances.
• Classifications of hypervisors:
 Type 1 (bare-metal hypervisors): VMware ESXi, Microsoft Hyper-V, and Kernel-based Virtual Machine (KVM).
 Type 2 (hosted hypervisors): VMware Workstation, Oracle VM VirtualBox, and Parallels Desktop.
Containerization
• A software deployment option that involves packaging up software code and its dependencies so it is easier to deploy across computing environments.
• Containerization is simply running a
particular application in a virtualized
space, rather than running the entire
guest operating system.
• Each container operates in a sandbox,
with the only means to interact being
through the user interface or API
calls.
• The benefits of containers include
faster deployment, less overhead,
easier migration, greater scalability,
and more fault tolerance.
Serverless
Architecture
• Serverless computing is a model
where backend services are provided
on an as-used basis.
• Serverless architectures rely on the
concepts of containerization and
virtualization to run small pieces of
microcode in a virtualized
environment to provide very specific
services and functions.
1.5 Identity and Access Management
(IAM)
• IAM is a broad term that encompasses the use of different technologies and policies to
identify, authenticate, and authorize users through automated means.
• Identification describes a method by which a subject (user, program, or process) claims to have a specific identity (username, account number, or e-mail address).
• Authentication is the process by which a system verifies the identity of the subject,
usually by requiring a piece of information that only the claimed identity should have.
• Authorization is a check against some type of policy to verify that this user has
indeed been authorized to access the requested resource and perform the requested
actions.
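To show how the three steps fit together, here is a hedged sketch with an in-memory user store and a simple role-based policy; the usernames, salt, roles, and policy are illustrative, and real systems use a directory service and a proper password-hashing scheme rather than this simplified approach.

```python
import hashlib
import hmac

# Illustrative in-memory "directory": identity -> (salted password hash, roles).
SALT = b"demo-salt"
USERS = {
    "jdoe": (hashlib.sha256(SALT + b"correct horse").hexdigest(), {"analyst"}),
}
POLICY = {"read_logs": {"analyst", "admin"}}  # action -> roles that may perform it

def authenticate(username: str, password: str) -> bool:
    """Verify the claimed identity (identification) using a secret (authentication)."""
    record = USERS.get(username)
    if record is None:
        return False
    candidate = hashlib.sha256(SALT + password.encode()).hexdigest()
    return hmac.compare_digest(candidate, record[0])

def authorize(username: str, action: str) -> bool:
    """Check the policy: is this authenticated user allowed to perform the action?"""
    roles = USERS.get(username, (None, set()))[1]
    return bool(roles & POLICY.get(action, set()))

if authenticate("jdoe", "correct horse") and authorize("jdoe", "read_logs"):
    print("access granted")
```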
Multifactor Authentication (MFA)
• Authentication is still commonly based on credentials consisting of a username and a
password.
• Multifactor authentication is the preferred modern authentication method. MFA just
means that more than one authentication factor is used.
• Most common authentication factors:
 Something you know (knowledge factor)
 Something you are (biometric or inherence factor)
 Something you have (possession factor)
• Other factors can be included with multifactor authentication to further secure the
process, including temporal factors (such as time of day) and location (such as logical
IP address, hostname, or even geographic location).
• Passwordless authentication is essentially any method of authentication that does not rely on a password.
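A familiar possession factor is the one-time code from an authenticator app; the sketch below implements the standard time-based one-time password algorithm (TOTP, RFC 6238) with only the Python standard library, and the base32 secret is an illustrative example.

```python
import base64
import hashlib
import hmac
import struct
import time

# Time-based one-time password (TOTP): HMAC over the current 30-second interval,
# truncated to a short numeric code, as used by many authenticator apps.
def totp(base32_secret: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app would show for this secret
```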
Single Sign-On (SSO)
• Single sign-on (SSO) enables users to authenticate only once and then be able to access all their authorized resources regardless of where they are.
• Because SSO centralizes the authentication mechanism, that system becomes a critical asset and thus a target for attacks. Compromise of the SSO system, or loss of availability, means loss of access to the entire organization's suite of applications that rely on the SSO system.
• SAML, a widely used method of implementing SSO, provides access and authorization decisions by exchanging XML-based assertions between an identity provider and a service provider.
Federation
• Federated identity is the concept of
using a person’s digital identity
credentials to gain access to various
services, often across organizations.
• Federation services can be considered
SSO but applied across multiple
organizations.
• The user’s identity credentials are
provided by a broker known as the
Federated Identity Manager or Identity
Provider (IdP).
• Many popular platforms, such as Google, Amazon, and Twitter, take advantage of their large memberships to provide federated identity services for third-party websites and applications.
OpenID
• OpenID is an open standard for user
authentication by third parties.
• Designed with web and mobile
applications in mind.
• SAML is mainly used for Enterprise and
Government applications
• OpenID defines three roles:
End user
Relying party
OpenID provider
• Very commonly used on websites where
the user is asked to authenticate using
IdPs like Google, Twitter, Apple or
Facebook.
Privileged Access Management (PAM)
• As users change roles or move from one department to another, they often are
assigned more and more access rights and permissions. This is commonly referred to
as authorization creep.
• Enforce least privilege on user accounts.
• Because of the power privileged accounts have, they are frequently among the first
targets for an attacker.
• Here are some best practices for managing privileged accounts:
 Minimize the number of privileged accounts.
 Ensure that each administrator has a unique account (that is, no shared accounts).
 Elevate user privileges when necessary, after which the user should return to regular
account privileges.
 Maintain an accurate, up-to-date account inventory.
1.6 Encryption
• Encryption is a method of transforming readable data, called plaintext, into a form
that appears to be random and unreadable, which is called ciphertext.
• Encryption enables the transmission of confidential information over insecure channels
without unauthorized disclosure.
• The science behind encryption and decryption is called cryptography, and a system
that encrypts and/or decrypts data is called a cryptosystem.
• The two main pieces of any cryptosystem are the algorithms and the keys.
• Algorithms used in cryptography are complex mathematical formulas that dictate the
rules of how the plaintext will be turned into ciphertext, and vice versa.
• A key is a string of random bits that will be used by the algorithm to add to the
randomness of the encryption process.
Symmetric Encryption
• The sender and receiver use the same key for encryption and decryption.
• Also known as private key cryptography, it is often used for high-volume data processing where speed, efficiency, and complexity are important.
• Some types of symmetric encryption algorithms include DES, 3DES, AES, Blowfish, and Twofish.
• Some drawbacks are:
 If the secret key is compromised, all messages ever encrypted with that key can be decrypted and read by an unauthorized party.
 The shared key must be distributed securely to every party, which becomes difficult at scale.
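Here is a minimal sketch of symmetric encryption using the third-party cryptography package's Fernet recipe (which is built on AES); the message content is illustrative, and the key would have to be shared secretly with the receiver.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# The same key encrypts and decrypts (symmetric encryption).
key = Fernet.generate_key()        # must be shared secretly with the receiver
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"wire transfer approved")
plaintext = cipher.decrypt(ciphertext)

print(ciphertext)   # unreadable without the key
print(plaintext)    # b'wire transfer approved'
```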
Asymmetric Encryption
• Also known as public-key cryptography, it uses two keys, designated as a public and a private key.
• The key pairs are mathematically related to each other in a way that enables anything that is encrypted by one to be decrypted by the other.
• Only the public key is shared with anyone; the private key is maintained securely by its owner.
• Solves the key distribution problem.
• Some types of asymmetric encryption algorithms include RSA, Diffie-Hellman, and elliptic curve cryptography (ECC).
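The hedged sketch below, again using the third-party cryptography package, encrypts a small message with a public key and decrypts it with the matching private key; the key size, padding choices, and message are illustrative.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair: the public half can be shared with anyone,
# the private half stays with its owner.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"session key material", oaep)   # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)                # only the owner can decrypt
print(plaintext)  # b'session key material'
```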
Symmetric vs. Asymmetric Cryptography
• The two approaches differ in key length and in encryption/decryption time.
• Symmetric encryption algorithms are significantly faster than asymmetric ones.
• The synergistic use of both symmetric and asymmetric encryption together is what
makes Public Key Infrastructure, along with digital certificates and digital signatures,
possible.
1.7 Public Key Infrastructure (PKI)
• Identities can be ascertained informally through a decentralized web of trust.
• PKI ensures a formal process for verifying identities.
• Certificate authorities (CAs) verify someone’s identity and then digitally sign that public
key, packaging it into a digital certificate or a public key certificate (X.509).
• A certificate revocation list (CRL), which is maintained by a revocation authority (RA), is
the authoritative reference for certificates that are no longer trustworthy.
• Many organizations disable CRL checks because they can slow down essential business
processes.
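To see a CA-signed certificate in practice, the sketch below fetches a server's X.509 certificate and prints a few of its fields; it assumes outbound network access and the third-party cryptography package, and the host name is illustrative.

```python
import ssl
from cryptography import x509  # third-party: pip install cryptography

# Fetch the server's certificate over TLS and inspect the fields a CA signed.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:        ", cert.subject.rfc4514_string())
print("Issuer (CA):    ", cert.issuer.rfc4514_string())
print("Not valid after:", cert.not_valid_after)
```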
Digital Signatures
• Digital signatures are short sequences of data that prove that a larger data sequence (say, an e-mail message or a file) was created by a given person and has not been modified by anyone else after being signed.
• In practice, digital signatures are
handled by email and other
cryptographic-enabled applications,
so all the hashing and decryption are
done automatically, not by the end
user.
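The sketch below shows the sign-then-verify flow with the third-party cryptography package: the private key signs the message, and anyone holding the public key can verify that it is authentic and unmodified; the key size, padding, and message are illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Quarterly report v3 (final)"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# The signer uses the private key...
signature = private_key.sign(message, pss, hashes.SHA256())

# ...and any recipient with the public key can verify the signature.
try:
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid: content is authentic and unmodified")
except InvalidSignature:
    print("signature invalid: content was altered or not signed by this key")
```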
1.8 Sensitive Data Protection
• There are some types of data that need special consideration with regard to storage
and transmission.
• Unauthorized disclosure of the following types of data may have serious, adverse
effects on the associated business, government, or individual.
Personally Identifiable Information /
Personal Health Information
• PII is data that can be used to identify an individual.
• PII is sometimes referred to as sensitive personal information.
• This data requires protection because of the risk of personal harm that could result
from its disclosure, alteration, or destruction.
• PHI is any data that relates to an individual’s past, present, or future physical or mental
health conditions.
• The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a law that
establishes standards to protect individuals’ personal health information (PHI).
Cardholder Data
• Mandates by the European Union’s General Data Protection Regulation (GDPR), for
example, have introduced a sweeping number of protections for the handling of
personal data, which includes financial information.
• The Gramm-Leach-Bliley Act (GLBA) of 1999, for example, covers all US-regulated
financial services corporations.
• The Federal Trade Commission’s Financial Privacy Rule governs the collection of
customers’ personal financial information and identifies requirements regarding privacy
disclosure on a recurring basis.
• PCI DSS is an example of an industry policing itself and was created by the major credit
card companies such as Visa, MasterCard, and so on, to reduce credit card fraud and
protect cardholder information.
• You should implement security mechanisms that meet regulatory requirements for cardholder data and other personal financial information.
Data Loss Prevention
• Data loss prevention (DLP) comprises the actions that organizations take to prevent
unauthorized external parties from gaining access to sensitive data.
• Many DLP solutions work in a similar fashion to IDSs by inspecting the type of traffic
moving across the network, attempting to classify it, and making a go or no-go decision
based on the aggregate of signals.
• Some SaaS platforms feature DLP solutions that can be enabled to help your
organization comply with business standards and industry regulations.
• Data loss prevention is both a security management activity and a collection of
technologies.
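As a rough illustration of the classification step a DLP tool performs, the sketch below scans outbound text for patterns that look like sensitive data; the regular expressions are illustrative and far simpler than production DLP rules.

```python
import re

# Illustrative sensitive-data patterns a DLP rule set might look for.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(message: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the message."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(message)]

outbound = "Please bill card 4111 1111 1111 1111 for the renewal."
hits = classify(outbound)
print("block" if hits else "allow", hits)
```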
Data Inventories
• A good first step is to find and characterize all the data in your organization before you even look at DLP solutions.
• Understanding data flows at the intersection between business and IT is critical to implementing DLP effectively.
Implementing DLPs
• Network DLP (NDLP) applies data protection policies to data in motion. NDLP
products are normally implemented as appliances that are deployed at the perimeter of
an organization’s networks.
• Endpoint DLP (EDLP) applies protection policies to data at rest and data in use. EDLP
is implemented in software running on each protected endpoint.
• EDLP provides a degree of protection that is normally not possible with NDLP. The
reason is that the data is observable at the point of creation.
• Another approach to DLP is to deploy both NDLP and EDLP across the enterprise.
Obviously, this approach is the costliest and most complex. For organizations that can
afford it, however, it offers the best coverage.
• Most modern DLP solutions offer both network and endpoint DLP components, making
them hybrid solutions.
Secure Sockets Layer and Transport Layer
Security Inspection
• The process of breaking certificate-based network encryption is known as SSL inspection.
• SSL/TLS inspection is the process of interrupting encrypted session between an end
user and a secure web-based server for the purposes of breaking data encryption and
inspecting the contents of the message traffic that is transmitted and received between
the user and the web server.
• The primary reason organizations may engage in SSL/TLS inspection is to prevent
sensitive data loss or exfiltration.
• In the practical world, SSL has been deprecated and is no longer considered secure. TLS
is the preferred replacement for SSL applications.
