ISC2 Chapter 1
Introduction
This chapter covers the types of access control, physical and logical controls, and how
they are combined to strengthen the overall security of an organization.
Controls Overview
Access controls are not just about restricting access to information systems and
data, but also about allowing access. It is about granting the appropriate level of
access to authorized personnel and processes and denying access to unauthorized
functions or individuals.
Subjects: any entity that requests access to our assets. The entity requesting
access may be a user, a client, a process or a program, for example. A subject
is the initiator of a request for service; therefore, a subject is referred to as
“active.” A subject:
o Is a user, a process, a procedure, a client (or a server), a program, a
device such as an endpoint, workstation, smartphone or removable
storage device with onboard firmware.
o Is active: It initiates a request for access to resources or services.
o Requests a service from an object.
o Should have a level of clearance (permissions) that relates to its ability to
successfully access services or resources.
Controls Assessments
Risk reduction depends on the effectiveness of the control. It must apply to the current
situation and adapt to a changing environment.
Defense in Depth
We are looking at all access permissions including building access, access to server
rooms, access to networks and applications and utilities. These are all implementations
of access control and are part of a layered defense strategy, also known as defense
in depth, developed by an organization.
Another example of multiple technical layers is when additional firewalls are used to
separate untrusted networks with differing security requirements, such as the internet
from trusted networks that house servers with sensitive data in the organization. When
a company has information at multiple sensitivity levels, it might require the network
traffic to be validated by rules on more than one firewall, with the most sensitive
information being stored behind multiple firewalls.
For a non-technical example, consider the multiple layers of access required to get to
the actual data in a data center. First, a lock on the door provides a physical barrier to
access the data storage devices. Second, a technical access rule prevents access to
the data via the network. Finally, a policy, or administrative control defines the rules that
assign access to authorized individuals.
For example, only individuals working in billing will be allowed to view consumer
financial data, and even fewer individuals will have the authority to change or delete that
data. This maintains confidentiality and integrity while also allowing availability by
providing administrative access with an appropriate password or sign-on that proves the
user has the appropriate permissions to access that data.
Systems often monitor access to private information, and if logs indicate that someone
has attempted to access a database without the proper permissions, that will
automatically trigger an alarm. The security administrator will then record the incident
and alert the appropriate people to take action.
The more critical information a person has access to, the greater the security around
that access should be. Such users should, at a minimum, be required to use multi-factor
authentication.
Privileged access management provides the first and perhaps most familiar use case.
Consider a human user identity that is granted various create, read, update, and delete
privileges on a database. Without privileged access management, the system’s access
control would have those privileges assigned to the administrative user in a static way,
effectively “on” 24 hours a day, every day. Security would be dependent upon the login
process to prevent misuse of that identity. Just-in-time privileged access management,
by contrast, includes role-based specific subsets of privileges that only become active in
real time when the identity is requesting the use of a resource or service.
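The just-in-time model described above can be sketched in a few lines. This is a hypothetical illustration, not a real PAM product's API; all class, method and role names are assumptions made for the example.

```python
# Hypothetical sketch of just-in-time privileged access management:
# privileges are "off" by default and only become active for a short,
# explicitly requested window. All names here are illustrative.
import time

class JITPrivilegeManager:
    """Grants role-based privilege subsets only for a limited time window."""

    def __init__(self, role_privileges):
        # role_privileges: maps role name -> set of privileges
        self.role_privileges = role_privileges
        self.active_grants = {}  # user -> (privileges, expiry timestamp)

    def request(self, user, role, duration_seconds=300):
        """Activate a role's privileges for a short window."""
        privileges = self.role_privileges.get(role, set())
        self.active_grants[user] = (privileges, time.time() + duration_seconds)
        return privileges

    def is_allowed(self, user, privilege):
        """Check whether a privilege is currently active for the user."""
        privileges, expiry = self.active_grants.get(user, (set(), 0))
        if time.time() > expiry:
            return False  # grant expired: privileges are off by default
        return privilege in privileges

pam = JITPrivilegeManager({"db_admin": {"create", "read", "update", "delete"}})
print(pam.is_allowed("alice", "delete"))   # False: nothing granted yet
pam.request("alice", "db_admin", duration_seconds=60)
print(pam.is_allowed("alice", "delete"))   # True: active within the window
```

Contrast this with the static model in the text, where the same privileges would be assigned to the administrative account permanently.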
Privileged Accounts
Privileged accounts are those with permissions beyond those of normal users, such as
managers and administrators. Broadly speaking, these accounts have elevated
privileges and are used by many different classes of users.
Typical measures used for moderating the potential for elevated risks from misuse or
abuse of privileged accounts include the following:
* More extensive and detailed logging than regular user accounts. The record of
privileged actions is vitally important, as both a deterrent (for privileged account holders
that might be tempted to engage in untoward activity) and an administrative control (the
logs can be audited and reviewed to detect and respond to malicious activity).
* More stringent access control than regular user accounts. As we will see emphasized
in this course, even nonprivileged users should be required to use MFA methods to gain
access to organizational systems and networks. Privileged users, or more accurately,
highly trusted users with access to privileged accounts, should be required to go
through additional or more rigorous authentication prior to exercising those
privileges. Just-in-time
identity should also be considered as a way to restrict the use of these privileges to
specific tasks and the times in which the user is executing them.
* Deeper trust verification than regular user accounts. Privileged account holders should
be subject to more detailed background checks, stricter nondisclosure agreements and
acceptable use policies, and be willing to be subject to financial investigation. Periodic
or event-triggered updates to these background checks may also be in order,
depending on the nature of the organization’s activities and the risks it faces.
* More auditing than regular user accounts. Privileged account activity should be
monitored and audited at a greater rate and extent than regular usage.
Segregation of Duties
The two-person rule is a security strategy that requires a minimum of two people
to be in an area together, making it impossible for a person to be in the area
alone. Many access control systems prevent an individual cardholder from entering a
selected high-security area unless accompanied by at least one other person. Use of
the two-person rule can help reduce insider threats to critical areas by requiring at least
two individuals to be present at any time. It is also used for life safety within a security
area; if one person has a medical emergency, there will be assistance present.
Other situations that call for provisioning new user accounts or changing privileges
include:
A new employee: When a new employee is hired, the hiring manager sends a
request to the security administrator to create a new user ID. This request
authorizes creation of the new ID and provides instructions on appropriate
access levels. Additional authorization may be required by company policy for
elevated permissions.
Change of position: When an employee has been promoted, their permissions
and access rights might change as defined by the new role, which will dictate any
added privileges and updates to access. At the same time, any access that is no
longer needed in the new job will be removed.
Separation of employment: When employees leave the company, depending
on company policy and procedures, their accounts must be disabled after the
termination date and time. It is recommended that accounts be disabled for a
period before they are deleted to preserve the integrity of any audit trails or files
that may be owned by the user. Since the account will no longer be used, it
should be removed from any security roles or additional access profiles. This
protects the company, so the separated employee is unable to access company
data after separation, and it also protects them because their account cannot be
used by others to access data.
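The separation workflow above (disable immediately, retain the disabled account for the audit period, delete only afterward) can be sketched as follows. The class, function names and 90-day retention period are assumptions for illustration, not a prescribed policy.

```python
# Illustrative sketch of the separation-of-employment workflow:
# disable the account at termination, strip its roles, keep it disabled
# for an audit-retention window, and only then allow deletion.
from datetime import datetime, timedelta

class Account:
    def __init__(self, user_id):
        self.user_id = user_id
        self.enabled = True
        self.roles = ["billing_viewer"]
        self.disabled_at = None

def separate_employee(account):
    """Disable on separation; remove security roles and access profiles."""
    account.enabled = False
    account.roles = []
    account.disabled_at = datetime.now()

def eligible_for_deletion(account, retention_days=90):
    """Delete only after the audit-retention window has elapsed,
    preserving the integrity of audit trails in the meantime."""
    if account.enabled or account.disabled_at is None:
        return False
    return datetime.now() >= account.disabled_at + timedelta(days=retention_days)

acct = Account("jdoe")
separate_employee(acct)
print(acct.enabled, acct.roles, eligible_for_deletion(acct))
# False [] False  (too soon to delete: audit window still open)
```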
Module 2: Understand Physical Access Controls
Physical access controls are items you can physically touch, which include physical
mechanisms deployed to prevent, monitor, or detect direct contact with systems or
areas within a facility. Examples of physical access controls include security guards,
fences, motion detectors, locked doors/gates, sealed windows, lights, cable protection,
laptop locks, badges, swipe cards, guard dogs, cameras, mantraps/turnstiles, and
alarms.
Physical access controls are necessary to protect the assets of a company, including its
most important asset, people. When considering physical access controls, the security
of the personnel always comes first, followed by securing other physical assets.
Physical security controls for human traffic are often done with technologies such as
turnstiles, mantraps and remotely or system-controlled door locks. For the system to
identify an authorized employee, an access control system needs to have some form of
enrollment station used to assign and activate an access control device. Most often, a
badge is produced and issued with the employee’s identifiers, with the enrollment
station giving the employee specific areas that will be accessible. In high-security
environments, enrollment may also include biometric characteristics. In general, an
access control system compares an individual’s badge against a verified database. If
authenticated, the access control system sends output signals allowing authorized
personnel to pass through a gate or a door to a controlled area. The systems are
typically integrated with the organization’s logging systems to document access activity
(authorized and unauthorized).
A range of card types allow the system to be used in a variety of environments. These
cards include: Bar code, Magnetic stripe, Proximity, Smart, Hybrid
Environmental Design
Crime Prevention Through Environmental Design (CPTED) provides direction to solve
the challenges of crime with organizational (people),
mechanical (technology and hardware) and natural design (architectural and circulation
flow) methods. By directing the flow of people, using passive techniques to signal who
should and should not be in a space and providing visibility to otherwise hidden spaces,
the likelihood that someone will commit a crime in that area decreases.
Biometrics
Even though the biometric data may not be secret, it is personally identifiable
information, and the protocol should not reveal it without the user’s consent. Biometrics
takes two primary forms, physiological and behavioral.
Monitoring
The use of physical access controls and monitoring personnel and equipment entering
and leaving as well as auditing/logging all physical events are primary elements in
maintaining overall organizational security.
Cameras
Cameras are normally integrated into the overall security program and centrally
monitored. Cameras provide a flexible method of surveillance and monitoring. They can
be a deterrent to criminal activity, can detect activities if combined with other sensors
and, if recorded, can provide evidence after the activity They are often used in locations
where access is difficult or there is a need for a forensic record.While cameras provide
one tool for monitoring the external perimeter of facilities, other technologies augment
their detection capabilities. A variety of motion sensor technologies can be effective in
exterior locations. These include infrared, microwave and lasers trained on tuned
receivers. Other sensors can be integrated into doors, gates and turnstiles, and strain-
sensitive cables and other vibration sensors can detect if someone attempts to scale a
fence. Proper integration of exterior or perimeter sensors will alert an organization to
any intruders attempting to gain access across open space or attempting to breach the
fence line.
Logs
In this section, we are concentrating on the use of physical logs, such as a sign-in sheet
maintained by a security guard, or even a log created by an electronic system that
manages physical access. Electronic systems that capture system and security logs
within software will be covered in another section.
A log is a record of events that have occurred. Physical security logs are essential to
support business requirements. They should capture and retain information as long as
necessary for legal or business reasons. Because logs may be needed to prove
compliance with regulations and assist in a forensic investigation, the logs must be
protected from manipulation. Logs may also contain sensitive data about customers or
users and should be protected from unauthorized disclosure.
The organization should have a policy to review logs regularly as part of their
organization’s security program. As part of the organization’s log processes, guidelines
for log retention must be established and followed. If the organizational policy states to
retain standard log files for only six months, that is all the organization should have.
A log anomaly is anything out of the ordinary. Identifying log anomalies is often the first
step in identifying security-related issues, both during an audit and during routine
monitoring. Some anomalies will be glaringly obvious: for example, gaps in date/time
stamps or account lockouts. Others will be harder to detect, such as someone trying to
write data to a protected directory. Although it may seem that logging everything so you
would not miss any important data is the best approach, most organizations would soon
drown under the amount of data collected.
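One of the anomalies named above, gaps in date/time stamps, lends itself to a simple automated check. This is a minimal sketch: the timestamp format and the 30-minute threshold are assumptions, and a real system would read from actual log files.

```python
# Minimal sketch of one log-anomaly check: flagging gaps in date/time
# stamps that could indicate tampering or a logging outage.
from datetime import datetime, timedelta

def find_time_gaps(timestamps, max_gap=timedelta(minutes=30)):
    """Return (previous, current) pairs of consecutive log entries
    that are farther apart than max_gap."""
    parsed = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in timestamps]
    gaps = []
    for prev, cur in zip(parsed, parsed[1:]):
        if cur - prev > max_gap:
            gaps.append((prev, cur))
    return gaps

entries = [
    "2024-01-01 09:00:00",
    "2024-01-01 09:05:00",
    "2024-01-01 13:00:00",  # nearly four-hour gap: worth investigating
    "2024-01-01 13:02:00",
]
print(find_time_gaps(entries))  # flags the 09:05 -> 13:00 gap
```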
Business and legal requirements for log retention will vary among economies, countries
and industries. Some businesses will have no requirements for data retention. Others
are mandated by the nature of their business or by business partners to comply with
certain retention data. For example, the Payment Card Industry Data Security Standard
(PCI DSS) requires that businesses retain one year of log data in support of PCI. Some
federal regulations include requirements for data retention as well.
If a business has no business or legal requirements to retain log data, how long should
the organization keep it? The first people to ask should be the legal department. Most
legal departments have very specific guidelines for data retention, and those guidelines
may drive the log retention policy.
Security Guards
Security guards are an effective physical security control. No matter what form of
physical access control is used, a security guard or other monitoring system will
discourage a person from masquerading as someone else or following closely on the
heels of another to gain access. This helps prevent theft and abuse of equipment or
information.
Alarm Systems
Alarm systems are commonly found on doors and windows in homes and office
buildings. In their simplest form, they are designed to alert the appropriate personnel
when a door or window is opened unexpectedly.
For example, an employee may enter a code and/or swipe a badge to open a door, and
that action would not trigger an alarm. Alternatively, if that same door was opened by
brute force without someone entering the correct code or using an authorized badge, an
alarm would be activated.
Another alarm system is a fire alarm, which may be activated by heat or smoke at a
sensor and will likely sound an audible warning to protect human lives in the vicinity. It
will likely also contact local response personnel as well as the closest fire department.
Finally, another common type of alarm system is in the form of a panic button. Once
activated, a panic button will alert the appropriate police or security personnel.
Whereas physical access controls are tangible methods or mechanisms that limit
someone from getting access to an area or asset, logical access controls are electronic
methods that limit someone from getting access to systems, and sometimes even to
tangible assets or areas. Types of logical access controls include:
Passwords
Biometrics (implemented on a system, such as a smartphone or laptop)
Badge/token readers connected to a system
These types of electronic tools limit who can get logical access to an asset, even if the
person already has physical access.
Discretionary access control (DAC) is a specific type of access control policy that
is enforced over all subjects and objects in an information system. In DAC, the
policy specifies that a subject who has been granted access to information can do
one or more of the following:
Most information systems in the world are DAC systems. In a DAC system, a user
who has access to a file is usually able to share that file with or pass it to someone else.
This grants the user almost the same level of access as the original owner of the
file. Rule-based access control systems are usually a form of DAC.
This methodology relies on the discretion of the owner of the access control object to
determine the access control subject’s specific rights. Hence, security of the object is
literally up to the discretion of the object owner. DACs are not very scalable; they rely on
the access control decisions made by each individual object owner, and it can be
difficult to find the source of access control issues when problems occur.
A mandatory access control (MAC) policy is one that is uniformly enforced across all
subjects and objects within the boundary of an information system. In simplest
terms, this means that only properly designated security administrators, as
trusted subjects, can modify any of the security rules that are established for
subjects and objects within the system. This also means that for all subjects defined
by the organization (that is, known to its integrated identity management and access
control system), the organization assigns a subset of total privileges for a subset of
objects, such that the subject is constrained from doing any of the following:
Although MAC sounds very similar to DAC, the primary difference is who can control
access. With Mandatory Access Control, it is mandatory for security administrators
to assign access rights or permissions; with Discretionary Access Control, it is
up to the object owner’s discretion.
Role-based access control (RBAC), as the name suggests, sets up user permissions
based on roles. Each role represents users with similar or identical permissions.
Role-based access control provides each worker privileges based on what role they
have in the organization. Only Human Resources
staff have access to personnel files, for example; only Finance has access to bank
accounts; each manager has access to their own direct reports and their own
department. Very high-level system administrators may have access to everything; new
employees would have very limited access, the minimum required to do their jobs.
Having multiple roles with different combinations of permissions can require close
monitoring to make sure everyone has the access they need to do their jobs and
nothing more. In this world where jobs are ever-changing, this can sometimes be a
challenge to keep track of, especially with extremely granular roles and permissions.
Upon hiring or changing roles, a best practice is not to copy an existing user's profile to
the new user. Instead, standard roles should be established, and new users created
from those standards rather than from an actual user. That way, new employees start
with the appropriate roles and permissions.
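The role-template approach described above can be sketched briefly. The role and permission names are illustrative assumptions chosen to mirror the Human Resources and Finance examples in the text.

```python
# Minimal RBAC sketch: standard roles map to permission sets, and new
# users are provisioned from role templates, never copied from other users.
ROLES = {
    "hr_staff": {"read_personnel_files"},
    "finance":  {"read_bank_accounts"},
    "manager":  {"read_direct_reports"},
    "new_hire": set(),  # minimum access; privileges come only from the role
}

users = {}

def create_user(user_id, role):
    """Provision a new user from a standard role template."""
    if role not in ROLES:
        raise ValueError(f"unknown role: {role}")
    users[user_id] = {"role": role}

def has_permission(user_id, permission):
    return permission in ROLES[users[user_id]["role"]]

create_user("asmith", "hr_staff")
print(has_permission("asmith", "read_personnel_files"))  # True
print(has_permission("asmith", "read_bank_accounts"))    # False
```

Because permissions attach to the role rather than the individual, a role change is a single reassignment, and no stale privileges are carried over from a copied profile.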
Introduction
When we talk about incident response (IR), business continuity (BC) and disaster
recovery (DR), we are focused on availability, which is accomplished through those
concepts.
Incident Terminology
Along with the organizational need to establish a Security Operations Center (SOC) is
the need to create a suitable incident response team. A typical incident response
team is a cross-functional group of individuals who represent the management,
technical and functional areas of responsibility most directly impacted by a security
incident. Potential team members include the following:
Team members should have training on incident response and the organization’s
incident response plan. Typically, team members assist with investigating the
incident, assessing the damage, collecting evidence, reporting the incident and
initiating recovery procedures. They would also participate in the remediation and
lessons learned stages and help with root cause analysis.
Many organizations now have a dedicated team responsible for investigating any
computer security incidents that take place. These teams are commonly known as
computer incident response teams (CIRTs) or computer security incident response
teams (CSIRTs). When an incident occurs, the response team has four primary
responsibilities:
List of the BCP team members, including multiple contact methods and backup
members
Immediate response procedures and checklists (security and safety procedures,
fire suppression procedures, notification of appropriate emergency-response
agencies, etc.)
Notification systems and call trees for alerting personnel that the BCP is being
enacted
Guidance for management, including designation of authority for specific
managers
How/when to enact the plan, including when and how it will be used
Contact numbers for critical members of the supply chain (vendors, customers,
possible external emergency providers, third-party partners)
How often should an organization test its business continuity plan (BCP)?
Routinely. Each individual organization must determine how often to test its BCP, but it
should be tested at predefined intervals as well as when significant changes happen
within the business environment.
Chapter 4: Network Security
What is Networking
A network is simply two or more computers linked together to share data, information or
resources.
Types of Networks
Local area network (LAN) - A local area network (LAN) is a network typically
spanning a single floor or building. This is commonly a limited geographical area.
Wide area network (WAN) - Wide area network (WAN) is the term usually
assigned to the long-distance connections between geographically remote
networks.
Network Devices
Networking Models
Many different models, architectures and standards exist that provide ways to
interconnect different hardware and software systems with each other for the purposes
of sharing information, coordinating their activities and accomplishing joint or shared
tasks.
Computers and networks emerge from the integration of communication devices,
storage devices, processing devices, security devices, input devices, output devices,
operating systems, software, services, data and people.
Translating the organization’s security needs into safe, reliable and effective network
systems needs to start with a simple premise. The purpose of all communications is to
exchange information and ideas between people and organizations so that they can get
work done.
Those simple goals can be re-expressed in network (and security) terms such as:
In the most basic form, a network model has at least two layers:
The OSI Model was developed to establish a common way to describe the
communication structure for interconnected computer systems. The OSI model serves
as an abstract framework, or theoretical model, for how protocols should function in an
ideal world, on ideal hardware. Thus, the OSI model has become a common conceptual
reference that is used to understand the communication of various hierarchical
components from software interfaces to physical hardware.
The OSI model divides networking tasks into seven distinct layers. Each layer is
responsible for performing specific tasks or operations with the goal of supporting data
exchange (in other words, network communication) between two computers. The layers
are interchangeably referenced by name or layer number. For example, Layer 3 is also
known as the Network Layer. The layers are ordered specifically to indicate how
information flows through the various levels of communication. Each layer
communicates directly with the layer above and the layer below it. For example, Layer 3
communicates with both the Data Link (2) and Transport (4) layers.
The Application, Presentation, and Session Layers (5-7) are commonly referred to
simply as data. However, each layer has the potential to perform encapsulation.
Encapsulation is the bundling together of data and methods, enforcing data hiding and
code hiding during all phases of software development and operational use; its
opposite process may be called unpacking, revealing, or de-encapsulation. The term is
also used for taking any set of data and packaging it or hiding it in another data
structure, as is common in network protocols and encryption. In the OSI model,
encapsulation is the addition of a header, and possibly a footer (trailer), by the protocol
used at that layer. Encapsulation is particularly important when discussing the
Transport, Network and Data Link Layers (2-4), which all generally include some form
of header. At the Physical Layer (1), the data unit is converted into binary, i.e.,
01010111, and sent across physical wires such as an Ethernet cable.
It's worth mapping some common networking terminology to the OSI Model so you can
see the value in the conceptual model.
When someone references an image file like a JPEG or PNG, we are talking
about the Presentation Layer (6).
When discussing logical ports such as NetBIOS, we are discussing the Session
Layer (5).
When discussing TCP/UDP, we are discussing the Transport Layer (4).
When discussing routers sending packets, we are discussing the Network Layer
(3).
When discussing switches, bridges or WAPs sending frames, we are discussing
the Data Link Layer (2).
Encapsulation occurs as the data moves down the OSI model from Application to
Physical. As data is encapsulated at each descending layer, the previous layer’s
header, payload and footer are all treated as the next layer’s payload. The data unit size
increases as we move down the conceptual model and the contents continue to
encapsulate.
The inverse action occurs as data moves up the OSI model layers from Physical to
Application. This process is known as de-encapsulation (or decapsulation). The header
and footer are used to properly interpret the data payload and are then discarded. As
we move up the OSI model, the data unit becomes smaller. The encapsulation/de-
encapsulation process is best depicted visually below:
(Diagram: data at the Application (7), Presentation (6) and Session (5) Layers;
headers are added as the data descends through the Transport (4), Network (3) and
Data Link (2) Layers on its way to the Physical Layer (1).)
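The encapsulation and de-encapsulation process can be illustrated with a toy example. The header and footer contents are purely illustrative placeholders, not real protocol headers; the point is that each descending layer treats everything above it as its payload.

```python
# Toy illustration of OSI encapsulation: each descending layer prepends
# its own header (and Layer 2 also appends a trailer), so the data unit
# grows on the way down and shrinks again on the way up.
def encapsulate(data: bytes) -> bytes:
    segment = b"[TCP-hdr]" + data                    # Layer 4: segment
    packet  = b"[IP-hdr]"  + segment                 # Layer 3: packet
    frame   = b"[ETH-hdr]" + packet + b"[ETH-ftr]"   # Layer 2: frame
    return frame  # Layer 1 would transmit this as bits on the wire

def de_encapsulate(frame: bytes) -> bytes:
    # Each ascending layer interprets and discards its own header/footer,
    # handing only the payload upward.
    packet  = frame[len(b"[ETH-hdr]"):-len(b"[ETH-ftr]")]
    segment = packet[len(b"[IP-hdr]"):]
    return segment[len(b"[TCP-hdr]"):]

wire = encapsulate(b"GET /index.html")
print(len(wire) > len(b"GET /index.html"))  # True: unit grows moving down
print(de_encapsulate(wire))                 # original data recovered intact
```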
The OSI model wasn’t the first or only attempt to streamline networking protocols or
establish a common communications standard. In fact, the most widely used protocol
today, TCP/IP, was developed in the early 1970s. The OSI model was not developed
until the late 1970s. The TCP/IP protocol stack focuses on the core functions of
networking.
Network Interface Layer: how data moves through the network.
The most widely used protocol suite is TCP/IP, but it is not just a single protocol; rather,
it is a protocol stack comprising dozens of individual protocols. TCP/IP is a platform-
independent protocol based on open standards. However, this is both a benefit and a
drawback. TCP/IP can be found in just about every available operating system, but it
consumes a significant amount of resources and is relatively easy to hack into because
it was designed for ease of use rather than for security.
At the Application Layer, TCP/IP protocols include Telnet, File Transfer Protocol (FTP),
Simple Mail Transport Protocol (SMTP), and Domain Name Service (DNS). The two
primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-duplex
connection-oriented protocol, whereas UDP is a simplex connectionless protocol.
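UDP's connectionless nature can be demonstrated with the standard library's socket module: a datagram is sent with no handshake at all, whereas TCP would require a connect()/accept() exchange first. This is a self-contained localhost sketch.

```python
# Demonstrating UDP's connectionless delivery: the sender transmits a
# datagram with no connection setup, unlike TCP's handshake.
import socket

# "Server": bind a UDP socket to an ephemeral localhost port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)
port = receiver.getsockname()[1]

# "Client": no connect() needed; just send the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # b'hello'
sender.close()
receiver.close()
```

Note that UDP gives no delivery guarantee; the datagram arrives here only because both sockets live on the same host.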
In the Internet Layer, Internet Control Message Protocol (ICMP) is used to determine
the health of a network or a specific link. ICMP is utilized by ping, traceroute and
other network management tools. The ping utility employs ICMP echo packets and
bounces them off remote systems. Thus, you can use ping to determine whether the
remote system is online, whether the remote system is responding promptly, whether
the intermediary systems are supporting communications, and the level of performance
efficiency at which the intermediary systems are communicating.
Base concepts
IPv4 provides a 32-bit address space; IPv6 provides a 128-bit address space. The IPv4
space is exhausted nowadays, but IPv4 is still widely used thanks to NAT (Network
Address Translation) technology. 32 bits means 4 octets of 8 bits, represented in dotted
decimal notation such as 192.168.0.1, which in binary notation is
11000000 10101000 00000000 00000001
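The dotted-decimal-to-binary conversion above is mechanical: each octet becomes an 8-bit binary field. A short standard-library sketch:

```python
# Convert a dotted-decimal IPv4 address into its 32-bit binary form,
# one 8-bit field per octet.
def to_binary(ip: str) -> str:
    return " ".join(f"{int(octet):08b}" for octet in ip.split("."))

print(to_binary("192.168.0.1"))
# 11000000 10101000 00000000 00000001
```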
The nature of the addressing scheme established by IPv4 meant that network designers
had to start thinking in terms of IP address reuse. IPv4 facilitated this in several ways,
such as its creation of the private address groups; this allows every LAN in every SOHO
(small office, home office) situation to use addresses such as 192.168.2.xxx for its
internal network addresses, without fear that some other system can intercept traffic on
their LAN. This table shows the private addresses available for anyone to use:
RANGE
10.0.0.0 to 10.255.255.254
172.16.0.0 to 172.31.255.254
192.168.0.0 to 192.168.255.254
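Python's standard ipaddress module knows these private (RFC 1918) ranges, so membership can be checked directly:

```python
# Check whether addresses fall in the private (RFC 1918) ranges listed
# above, using only the standard library.
import ipaddress

for addr in ["10.5.5.5", "172.20.1.1", "192.168.2.10", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three are private; 8.8.8.8 is publicly routable.
```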
The first octet of 127 is reserved for a computer’s loopback address. Usually, the
address 127.0.0.1 is used. The loopback address is used to provide a mechanism
for self-diagnosis and troubleshooting at the machine level. This mechanism allows
a network administrator to treat a local machine as if it were a remote machine and ping
the network interface to establish whether it is operational.
* A much larger address field: IPv6 addresses are 128 bits, which supports
2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This ensures
that we will not run out of addresses.
* Improved security: IPsec is an optional part of IPv4 networks, but a mandatory
component of IPv6 networks. This will help ensure the integrity and confidentiality of IP
packets and allow communicating partners to authenticate with each other.
* Improved quality of service (QoS): This will help services obtain an appropriate share of
a network’s bandwidth.
An IPv6 address is shown as 8 groups of four digits. Instead of numeric (0-9) digits
like IPv4, IPv6 addresses use the hexadecimal range (0000-ffff) and are separated
by colons (:) rather than periods (.). An example IPv6 address
is 2001:0db8:0000:0000:0000:ffff:0000:0001. To make it easier for humans to read
and type, it can be shortened by removing the leading zeros at the beginning of each
field and substituting two colons (::) for the longest consecutive zero fields. All fields
must retain at least one digit. After shortening, the example address above is rendered
as 2001:db8::ffff:0:1, which is much easier to type. As in IPv4, there are some
addresses and ranges that are reserved for special uses:
* ::1 is the local loopback address, used the same as 127.0.0.1 in IPv4.
* The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved for documentation
use, just like in the examples above.
* fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses reserved for internal network
use and are not routable on the internet.
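The shortening rules described above (drop leading zeros, collapse the longest run of zero fields into "::") are applied automatically by Python's ipaddress module, which makes it easy to verify the worked example:

```python
# Python's ipaddress module renders IPv6 addresses in the compressed
# notation: leading zeros dropped, longest zero run collapsed to "::".
import ipaddress

full = "2001:0db8:0000:0000:0000:ffff:0000:0001"
print(ipaddress.ip_address(full))               # 2001:db8::ffff:0:1
print(ipaddress.ip_address("::1").is_loopback)  # True, like 127.0.0.1 in IPv4
```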
What is WiFi?
In a LAN, threat actors need to enter the physical space or immediate vicinity of the
physical media itself. For wired networks, this can be done by placing sniffer taps onto
cables, plugging in USB devices, or using other tools that require physical access to the
network. By contrast, wireless media intrusions can happen at a distance.
Physical Ports: Physical ports are the ports on the routers, switches, servers,
computers, etc. that you connect the wires, e.g., fiber optic cables, Cat5 cables,
etc., to create a network.
Logical Ports: When a communication connection is established between two
systems, it is done using ports. A logical port (also called a socket) is little more
than an address number that both ends of the communication link agree to use
when transferring data. Ports allow a single IP address to be able to support
multiple simultaneous communications, each using a different port number. In the
Application Layer of the TCP/IP model (which includes the Session,
Presentation, and Application Layers of the OSI model) reside numerous
application- or service-specific protocols. Data types are mapped using port
numbers associated with services. For example, web traffic (or HTTP) is port 80.
Secure web traffic (or HTTPS) is port 443. Table 5.4 highlights some of these
protocols and their customary or assigned ports. You’ll note that in several cases
a service (or protocol) may have two ports assigned, one secure and one
insecure. When in doubt, systems should be implemented using the most secure
available version of a protocol and its services.
o Well-known ports (0–1023): These ports are related to the common
protocols at the core of the Transmission Control Protocol/Internet
Protocol (TCP/IP) model, such as the Domain Name System (DNS), Simple
Mail Transfer Protocol (SMTP), etc.
o Registered ports (1024–49151): These ports are often associated with
proprietary applications from vendors and developers. While they are
officially approved by the Internet Assigned Numbers Authority (IANA), in
practice many vendors simply implement a port of their choosing.
Examples include Remote Authentication Dial-In User Service (RADIUS)
authentication (1812), Microsoft SQL Server (1433/1434) and the Docker
REST API (2375/2376).
o Dynamic or private ports (49152–65535): Whenever a service is
requested that is associated with well-known or registered ports, those
services will respond with a dynamic port that is used for that session and
then released.
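As a minimal illustration of these three ranges, here is a small hypothetical helper that classifies a port number:

```python
# Sketch of a helper that classifies a TCP/UDP port number into the
# three IANA ranges described above. The function name is invented
# for illustration.
def classify_port(port: int) -> str:
    if not 0 <= port <= 65535:
        raise ValueError("port must be 0-65535")
    if port <= 1023:
        return "well-known"        # core protocols (DNS, SMTP, HTTP, ...)
    if port <= 49151:
        return "registered"        # vendor/application ports (RADIUS, MSSQL, ...)
    return "dynamic/private"       # ephemeral session ports

print(classify_port(443))    # well-known (HTTPS)
print(classify_port(1812))   # registered (RADIUS authentication)
print(classify_port(50000))  # dynamic/private
```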
Secure Ports
Some network protocols transmit information in clear text, meaning it is not encrypted
and should not be used. Clear text information is subject to network sniffing. This tactic
uses software to inspect packets of data as they travel across the network and extract
text such as usernames and passwords. Network sniffing could also reveal the content
of documents and other files if they are sent via insecure protocols. The table below
shows some of the insecure protocols along with recommended secure alternatives.
| Insecure Port | Protocol | Description | Secure Alternative Port | Protocol |
| --- | --- | --- | --- | --- |
| 25 | SMTP | SMTP is used to transfer email between mail servers and clients. The secure alternative is to use port 587 for SMTP using Transport Layer Security (TLS), which will encrypt the data between the mail client and the mail server. | 587 | SMTP with TLS |
| 80 | HTTP | HTTP is used to transfer web content across the internet. Information sent via HTTP is not encrypted and is susceptible to sniffing attacks. HTTPS using TLS encryption is preferred, as it protects the data in transit between the server and the browser. Note that this is often notated as SSL/TLS. Secure Sockets Layer (SSL) has been compromised and is no longer considered secure. It is now recommended for web servers and clients to use Transport Layer Security (TLS) 1.3 or higher for the best protection. | 443 | HTTPS |
| 143 | IMAP | IMAP is used by mail clients to retrieve email from a mail server. IMAPS adds SSL/TLS security to encrypt the data between the mail client and the mail server. | 993 | IMAPS |
| 389 | LDAP | LDAP is used to communicate directory information from servers to clients, such as an address book for email or usernames for logins. The LDAP protocol also allows records in the directory to be updated, introducing additional risk. Since LDAP is not encrypted, it is susceptible to sniffing and manipulation attacks. Lightweight Directory Access Protocol Secure (LDAPS) adds SSL/TLS security to protect the information while it is in transit. | 636 | LDAPS |
Types of Threats
Spoofing: an attack with the goal of gaining access to a target system through
the use of a falsified identity. Spoofing can be used against IP addresses,
MAC addresses, usernames, system names, wireless network SSIDs, email
addresses, and many other types of logical identification.
Phishing: an attack that attempts to misdirect legitimate users to malicious
websites through the abuse of URLs or hyperlinks in emails.
DoS/DDoS: a denial-of-service (DoS) attack is a network resource consumption
attack that has the primary goal of preventing legitimate activity on a
victimized system. Attacks involving numerous unsuspecting secondary victim
systems are known as distributed denial-of-service (DDoS) attacks.
Virus: The computer virus is perhaps the earliest form of malicious code to
plague security administrators. As with biological viruses, computer viruses
have two main functions—propagation and destruction. A virus is a self-
replicating piece of code that spreads without the consent of a user, but
frequently with their assistance (a user has to click on a link or open a file).
Worm: Worms pose a significant risk to network security. They contain the
same destructive potential as other malicious code objects with an added twist—
they propagate themselves without requiring any human intervention.
Trojan: a Trojan is a software program that appears benevolent but carries a
malicious, behind-the-scenes payload with the potential to wreak havoc on a
system or network. For example, ransomware often uses a Trojan to infect a
target machine and then uses encryption technology to encrypt documents,
spreadsheets and other files stored on the system with a key known only to the
malware creator.
On-path attack: In an on-path attack, attackers place themselves between two
devices, often between a web browser and a web server, to intercept or modify
information that is intended for one or both of the endpoints. On-path
attacks are also known as man-in-the-middle (MITM) attacks.
Side-channel: A side-channel attack is a passive, noninvasive
attack to observe the operation of a device. Methods include power
monitoring, timing and fault analysis attacks.
Advanced Persistent Threat: Advanced persistent threat (APT) refers to threats
that demonstrate an unusually high level of technical and operational
sophistication spanning months or even years. APT attacks are often
conducted by highly organized groups of attackers.
Insider Threat: Insider threats are threats that arise from individuals who are
trusted by the organization. These could be disgruntled employees or
employees involved in espionage. Insider threats are not always willing
participants. A trusted user who falls victim to a scam could be an unwilling
insider threat.
Malware: A program that is inserted into a system, usually covertly, with the
intent of compromising the confidentiality, integrity or availability of the
victim’s data, applications or operating system or otherwise annoying or
disrupting the victim.
Ransomware: Malware used for the purpose of facilitating a ransom attack.
Ransomware attacks often use cryptography to “lock” the files on an affected
computer and require the payment of a ransom fee in return for the “unlock”
code.
Here are some examples of steps that can be taken to protect networks.
If a system doesn’t need a service or protocol, it should not be running. Attackers
cannot exploit a vulnerability in a service or protocol that isn’t running on a
system.
Firewalls can prevent many different types of attacks. Network-based firewalls
protect entire networks, and host-based firewalls protect individual systems.
Preventing Threats
Scans: Regular vulnerability and port scans are a good way to evaluate the
effectiveness of security controls used within an organization. They may reveal areas
where patches or security settings are insufficient, where new vulnerabilities have
developed or become exposed, and where security policies are either ineffective or not
being followed. Attackers can exploit any of these vulnerabilities.
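Full vulnerability scanners are far more sophisticated, but the core of a TCP "connect" port scan can be sketched in a few lines of Python. Only ever run scans against hosts you are authorized to test; the demo below scans only a listener it opens itself on localhost.

```python
# Minimal TCP "connect" port scan sketch: attempt a connection to each
# port and record the ones that accept.
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Demo: open a listener on an OS-assigned port, then find it with the scanner.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
assert scan("127.0.0.1", [port]) == [port]
listener.close()
```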
Firewalls: Early computer security engineers borrowed the term from building
construction, where a firewall is a barrier that stops fire from spreading, for the
devices and services that isolate network segments from each other as a security
measure. As a result, firewalling refers to the process of designing, using or operating
different processes in ways that isolate high-risk activities from lower-risk ones. Firewalls
enforce policies by filtering network traffic based on a set of rules. While a firewall
should always be placed at internet gateways, other internal network considerations and
conditions determine where a firewall would be employed, such as network zoning or
segregation of different levels of sensitivity. Firewalls have rapidly evolved over time to
provide enhanced security capabilities. A next-generation firewall integrates a variety
of threat management capabilities into a single framework, including proxy services, intrusion
prevention services (IPS) and tight integration with the identity and access
management (IAM) environment to ensure only authorized users are permitted to
pass traffic across the infrastructure. While firewalls can manage traffic at Layers 2
(MAC addresses), 3 (IP ranges) and 7 (application programming interface
(API) and application firewalls), the traditional implementation has been to control
traffic at Layer 4. Traditional firewalls typically offer port/IP address filtering,
IDS/IPS, antivirus gateway, web proxy and VPN features; next-generation (NG) firewalls
add IAM attributes, anti-bot protection and Firewall as a Service (FaaS) to that list.
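The rule-based filtering described above can be sketched as a toy first-match packet filter; the rules, addresses and field names here are invented for illustration:

```python
# Toy first-match packet filter illustrating the "ordered rules,
# default deny" evaluation model used by Layer 4 firewalls. Real
# firewalls match far richer criteria; this is only a sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # exact source IP, or "any"
    dst_port: Optional[int]  # destination port, or None for any port

# Ordered rule base: first matching rule wins.
RULES = [
    Rule("deny", "203.0.113.5", None),  # block a known-bad host entirely
    Rule("allow", "any", 443),          # HTTPS from anywhere
    Rule("allow", "any", 80),           # HTTP from anywhere
]

def filter_packet(src: str, dst_port: int) -> str:
    for rule in RULES:
        if rule.src not in ("any", src):
            continue                    # source does not match this rule
        if rule.dst_port not in (None, dst_port):
            continue                    # destination port does not match
        return rule.action              # first matching rule decides
    return "deny"                       # default deny: nothing matched
```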
Intrusion Prevention System (IPS): An intrusion prevention system (IPS) is a special
type of active IDS that automatically attempts to detect and block attacks before
they reach target systems. A distinguishing difference between an IDS and an IPS is
that the IPS is placed in line with the traffic. In other words, all traffic must pass
through the IPS and the IPS can choose what traffic to forward and what traffic to
block after analyzing it. This allows the IPS to prevent an attack from reaching a
target. Since IPS systems are most effective at preventing network-based attacks, it is
common to see the IPS function integrated into firewalls. Just like IDS, there are
Network-based IPS (NIPS) and Host-based IPS (HIPS).
When it comes to data centers, there are two primary options: organizations
can outsource the data center or own the data center. If the data center is owned, it
will likely be built on premises. A place, like a building for the data center is needed,
along with power, HVAC, fire suppression and redundancy.
Redundancy
If the organization requires full redundancy, devices should have two power supplies
connected to diverse power sources. Those power sources would be backed up by
batteries and generators. In a high-availability environment, even generators would be
redundant and fed by different fuel types.
The service level agreement goes down to a granular level. For example, if an
organization outsources its IT services, it might require two full-time technicians to be
readily available, at least Monday through Friday from eight to five; with cloud
computing, it might require access to the information in its backup systems within 10
minutes. An SLA specifies these more intricate aspects of the services.
Cloud
Cloud Characteristics
Cloud-based assets include any resources that an organization accesses using cloud
computing. Cloud computing refers to on-demand access to computing resources
available from almost anywhere, and cloud computing resources are highly
available and easily scalable. Organizations typically lease cloud-based resources
from outside the organization. Cloud computing has many benefits for organizations,
which include but are not limited to:
o Resource Pooling
o Broad Network Access
o Rapid Elasticity
o Measured Service
o On-Demand Self-Service
Usage is metered and priced according to units (or instances) consumed. This
can also be billed back to specific departments or functions.
Reduced cost of ownership. There is no need to buy any assets for everyday
use, no loss of asset value over time and a reduction of other related costs of
maintenance and support.
Reduced energy and cooling costs, along with “green IT” environment effect with
optimum use of IT resources and systems.
Allows an enterprise to scale up new software or data-based services/solutions
through cloud systems quickly and without having to install massive hardware
locally.
Service Models
Some cloud-based services only provide data storage and access. When storing data in
the cloud, organizations must ensure that security controls are in place to prevent
unauthorized access to the data. There are varying levels of responsibility for assets
depending on the service model. This includes maintaining the assets, ensuring they
remain functional, and keeping the systems and applications up to date with current
patches. In some cases, the cloud service provider is responsible for these steps. In
other cases, the consumer is responsible for these steps.
Services
Deployment Models
* Public: what we commonly refer to as the cloud for the public user. There
is no real mechanism, other than applying for and paying for the cloud service. It
is open to the public and is, therefore, a shared resource that many people will be
able to use as part of a resource pool. A public cloud deployment model includes
assets available for any consumers to rent or lease and is hosted by an external cloud
service provider (CSP). Service level agreements can be effective at ensuring the CSP
provides the cloud-based services at a level acceptable to the organization.
* Private: it begins with the same technical concept as public clouds, **except that
instead of being shared with the public, they are generally developed and deployed for a
private organization that builds its own cloud**. Organizations can create and host
private clouds using their own resources. Therefore, this deployment model includes
cloud-based assets for a single organization. As such, the organization is responsible
for all maintenance. However, an organization can also rent resources from a third party
and split maintenance requirements based on the service model (SaaS, PaaS or IaaS).
Private clouds provide organizations and their departments private access to the
computing, storage, networking and software assets that are available in the private
cloud.
* Community: it can be either public or private. **What makes them unique is that they
are generally developed for a particular community**. An example could be a public
community cloud focused primarily on organic food, or maybe a community cloud
focused specifically on financial services. The idea behind the community cloud is that
people of like minds or similar interests can get together, share IT capabilities and
services, and use them in a way that is beneficial for the particular interests that they
share.
Some other common MSP implementations are: Augment in-house staff for projects;
Utilize expertise for implementation of a product or service; Provide payroll services;
Provide Help Desk service management; Monitor and respond to security incidents;
Manage all in-house IT infrastructure.
Think of a rule book and legal contract—that combination is what you have in a
service-level agreement (SLA). Let us not underestimate or downplay the importance
of this document/agreement. In it, the minimum level of service, availability,
security, controls, processes, communications, support and many other crucial
business elements are stated and agreed to by both parties.
Network Design
Defense in Depth
Defense in depth uses a layered approach when designing the security posture of
an organization. Think about a castle that holds the crown jewels. The jewels will be
placed in a vaulted chamber in a central location guarded by security guards. The castle
is built around the vault with additional layers of security—soldiers, walls, a moat. The
same approach is true when designing the logical security of a facility or system. Using
layers of security will deter many attackers and encourage them to focus on other,
easier targets.
Data: Controls that protect the actual data with technologies such as encryption,
data leak prevention, identity and access management and data controls.
Application: Controls that protect the application itself with technologies such
as data leak prevention, application firewalls and database monitors.
Host: Every control that is placed at the endpoint level, such as antivirus,
endpoint firewall, configuration and patch management.
Internal network: Controls that are in place to protect uncontrolled data flow
and user access across the organizational network. Relevant technologies
include intrusion detection systems, intrusion prevention systems, internal
firewalls and network access controls.
Perimeter: Controls that protect against unauthorized access to the network.
This level includes the use of technologies such as gateway firewalls,
honeypots, malware analysis and secure demilitarized zones (DMZs).
Physical: Controls that provide a physical barrier, such as locks, walls or
access control.
Policies, procedures and awareness: Administrative controls that
reduce insider threats (intentional and unintentional) and identify risks as
soon as they appear.
Zero Trust
Zero trust networks are often microsegmented networks, with firewalls at nearly
every connecting point. Zero trust encapsulates information assets, the services that
apply to them and their security properties. This concept recognizes that once inside
a trust-but-verify environment, a user has perhaps unlimited capabilities to roam
around, identify assets and systems and potentially find exploitable
vulnerabilities. Placing a greater number of firewalls or other security boundary control
devices throughout the network increases the number of opportunities to detect a
troublemaker before harm is done. Many enterprise architectures are pushing this
to the extreme of microsegmenting their internal networks, which enforces
frequent re-authentication of a user ID.
Zero trust is an evolving design approach which recognizes that even the most
robust access control systems have their weaknesses. It adds defenses at the user,
asset and data level, rather than relying on perimeter defense. In the extreme, it insists
that every process or action a user attempts to take must be authenticated and
authorized; the window of trust becomes vanishingly small.
While microsegmentation adds internal perimeters, zero trust places the focus on
the assets, or data, rather than the perimeter. Zero trust builds more effective
gates to protect the assets directly rather than building additional or higher walls.
Considering just IoT for a moment, it is important to understand the range of devices
that might be found within an organization.
The NAC device will provide the network visibility needed for access security and
may later be used for incident response. Aside from identifying connections, it should
also be able to provide isolation for noncompliant devices within a quarantined network
and provide a mechanism to “fix” the noncompliant elements, such as turning on
endpoint protection. In short, the goal is to ensure that all devices wishing to join the
network do so only when they comply with the requirements laid out in the organization
policies. This visibility will encompass internal users as well as any temporary users
such as guests or contractors, etc., and any devices they may bring with them into the
organization.
Let’s consider some possible use cases for NAC deployment: Medical devices; IoT
devices; BYOD/mobile devices (laptops, tablets, smartphones); Guest users and
contractors;
It is critically important that all mobile devices, regardless of their owner, go through an
onboarding process, ideally each time a network connection is made, and that the
device is identified and interrogated to ensure the organization’s policies are being met.
Network-enabled devices are any type of portable or nonportable device that has
native network capabilities. This generally assumes the network in question is a
wireless type of network, typically provided by a mobile telecommunications company.
Network-enabled devices include smartphones, mobile phones, tablets, smart TVs
or streaming media players, network-attached printers, game systems, and much
more.
The Internet of Things (IoT) is the collection of devices that can communicate over
the internet with one another or with a control console in order to affect and
monitor the real world. IoT devices might be labeled as smart devices or smart-home
equipment. Many of the ideas of industrial environmental control found in office
buildings are finding their way into more consumer-available solutions for small offices
or personal homes.
Embedded systems and network-enabled devices that communicate with the internet
are considered IoT devices and need special attention to ensure that communication is
not used in a malicious manner. Because an embedded system is often in control of a
mechanism in the physical world, a security breach could cause harm to people and
property. Since many of these devices have multiple access routes, such as Ethernet,
wireless, Bluetooth, etc., special care should be taken to isolate them from other
devices on the network. You can impose logical network segmentation with switches
using VLANs, or through other traffic-control means, including MAC addresses, IP
addresses, physical ports, protocols, or application filtering, routing, and access control
management. Network segmentation can be used to isolate IoT environments.
Microsegmentation
The toolsets of current adversaries are polymorphic in nature and allow threats to
bypass static security controls. Modern cyberattacks take advantage of traditional
security models to move easily between systems within a data center.
Microsegmentation aids in protecting against these threats. A fundamental design
requirement of microsegmentation is to understand the protection requirements
for traffic within the data center as well as for traffic to and from the internet.
When organizations avoid infrastructure-centric design paradigms, they are more likely
to become more efficient at service delivery in the data center and become apt at
detecting and preventing advanced persistent threats.
Virtual local area networks (VLANs) allow network administrators to use switches to
create software-based LAN segments, which can segregate or consolidate traffic
across multiple switch ports. Devices that share a VLAN communicate through
switches as if they were on the same Layer 2 network. Since VLANs act as discrete
networks, communications between VLANs must be enabled. Broadcast traffic is limited
to the VLAN, reducing congestion and reducing the effectiveness of some attacks.
Administration of the environment is simplified, as the VLANs can be reconfigured when
individuals change their physical location or need access to different services. VLANs
can be configured based on switch port, IP subnet, MAC address and protocols. VLANs
do not guarantee a network’s security. At first glance, it may seem that traffic cannot be
intercepted because communication within a VLAN is restricted to member devices.
However, there are attacks that allow a malicious user to see traffic from other VLANs
(so-called VLAN hopping). The VLAN technology is only one tool that can improve the
overall security of the network environment.
Understanding risks
Risk assessment
Risk assessment identifies and triages risks.
Two categories of technique, qualitative and quantitative analysis, can be used to
assess the likelihood and impact of a risk. Once assessed, there are four common
ways to treat a risk:
1. Risk avoidance
o Risk avoidance changes business practices to make a risk irrelevant.
2. Risk transference
o Risk transference shifts the impact of a risk to another party, for example
by purchasing insurance.
3. Risk mitigation
o Risk mitigation reduces the likelihood or impact of a risk.
4. Risk acceptance
o Risk acceptance is the choice to continue operations in the face of a risk.
Security controls reduce the likelihood or impact of a risk and help identify issues.
1. Control Purpose
i. Preventive
Preventive controls stop a security issue from occurring.
ii. Detective
Detective controls identify security issues requiring investigation.
iii. Corrective
Corrective controls remediate security issues that have occurred.
2. Control Mechanism
i. Technical
Technical controls use technology to achieve control objectives.
ii. Administrative
Administrative controls use processes to achieve control objectives.
iii. Physical
Physical controls impact the physical world.
Configuration management
Security Concepts
Confidentiality
Confidentiality Concerns
1. Snooping
o Snooping is gathering information that is left out in the open.
o "Clean desk policies" protect against snooping.
2. Dumpster Diving
o Dumpster diving is searching through discarded materials for sensitive
information.
o "Shredding" protects against dumpster diving.
3. Eavesdropping
o Eavesdropping is listening in on sensitive conversations.
o "Rules about sensitive conversations" prevent eavesdropping.
4. Wiretapping
o Wiretapping is electronic eavesdropping: listening in on communications
carried over a wire or network.
o "Encryption" protects against wiretapping.
5. Social Engineering
o The attacker uses psychological tricks to persuade an employee to give
them sensitive information or access to internal systems.
o The best defense is "educating users".
Integrity
1. Unauthorized modification
o Attackers make changes without permission.
o "Least privilege" protects against integrity attacks.
2. Impersonation
o Attackers pretend to be someone else.
o "User education" protects against impersonation attacks.
3. Man-in-the-middle (MITM)
o MITM attacks place the attacker in the middle of a communications session.
o "Encryption" protects against MITM attacks.
4. Replay
o Replay attacks eavesdrop on logins and reuse the captured credentials.
o "Encryption" protects against replay attacks.
Availability
Availability Concerns
The access control process consists of three steps that you must understand. These
steps are identification, authentication and authorization.
1. Identification involves making a claim of identity.
o Electronic identification commonly uses usernames.
2. Authentication requires proving a claim of identity.
o Electronic authentication commonly uses passwords.
3. Authorization ensures that an action is allowed.
o Electronic authorization commonly uses access control lists.
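The three steps can be sketched in Python; the usernames, resource names and ACL below are invented for illustration:

```python
# Toy sketch of the three-step access control process: identification
# (username), authentication (password, stored as a salted hash), and
# authorization (an access control list). All names/data are invented.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = os.urandom(16)
USERS = {"alice": hash_password("correct horse", SALT)}  # authentication data
ACL = {"payroll.xlsx": {"alice"}}                        # authorization data

def access(username: str, password: str, resource: str) -> bool:
    if username not in USERS:                            # 1. identification
        return False
    supplied = hash_password(password, SALT)
    if not hmac.compare_digest(USERS[username], supplied):  # 2. authentication
        return False
    return username in ACL.get(resource, set())          # 3. authorization

assert access("alice", "correct horse", "payroll.xlsx")
assert not access("alice", "wrong password", "payroll.xlsx")
```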
Password security
Multifactor authentication
Non-repudiation concerns can be addressed with: 1. Signed contracts 2. Digital
signatures 3. Video surveillance
Privacy
Privacy Concerns
Private information may come in many forms. Two of the most common elements of
private information are "Personally identifiable information" and "Protected health
information".
1. Personally identifiable information, or PII, includes all information that can be tied
back to a specific individual.
2. Protected health information, or PHI, includes healthcare records that are
regulated under the Health Insurance Portability and Accountability Act,
otherwise known as HIPAA.
Confidentiality
Data that needs protections is also known as PII or PHI. PII stands for Personally
Identifiable Information and it is related to the area of confidentiality and it means any
data that could be used to identify an individual. PHI stands for Protected Health
Information, and it comprises information about one's health status. Data that needs
protection also includes classified or sensitive information, such as trade secrets,
research, business plans and intellectual property.
1. Snooping involves gathering information that is left out in the open. Clean desk
policies protect against snooping.
2. Dumpster diving involves looking for sensitive materials in the trash; paper
shredding protects against it.
3. Eavesdropping occurs when someone secretly listens to a conversation, and it
can be prevented with rules about sensitive conversations.
4. Wiretapping is the electronic version of eavesdropping; the best defense is
using encryption to protect the communication.
5. Social engineering is best countered by educating users, which protects them
against social engineering attacks.
Integrity
Consistency is another concept related to integrity and requires that all instances of the
data be identical in form, content and meaning. When related to system integrity, it
refers to the maintenance of a known good configuration and expected operational
function as the system processes the information. Ensuring integrity begins with an
awareness of state, which is the current condition of the system. Specifically, this
awareness concerns the ability to document and understand the state of data or a
system at a certain point, creating a baseline. A baseline, which means a documented,
lowest level of security configuration allowed by a standard or organization, can refer to
the current state of the information—whether it is protected.
Availability
It means that systems and data are accessible at the time users need them. It can be
defined as timely and reliable access to information and the ability to use it, and for
authorized users, timely and reliable access to data and information services. The core
concept of availability is that data is accessible to authorized users when and where it
is needed and in the form and format required. This does not mean that data or
systems are available 100% of the time. Instead, the systems and data meet the
requirements of the business for timely and reliable access. Some systems and data
are far more critical than others, so the security professional must ensure that the
appropriate levels of availability are provided. This requires consultation with
the involved business to ensure that critical systems are identified and available.
Availability is often associated with the term criticality, which means a measure of the
degree to which an organization depends on the information or information system for
the success of a mission or of a business function (NIST SP 800-60), because it
represents the importance an organization gives to data or an information system in
performing its operations or achieving its mission
Identification
Authentication
When users have stated their identity, it is necessary to validate that they are the
rightful owners of that identity. This process of verifying or proving the user's
identification is known as authentication. In other terms, it is the access control
process of validating that the identity being claimed by a user or entity is known to the
system, by comparing one (single-factor authentication, or SFA) or more (multi-factor
authentication, or MFA) factors of authentication. Simply put, authentication is a
process to prove the identity of the requestor.
Methods of Authentication
There are two types of authentication. Using only one of the methods of authentication
stated previously is known as single-factor authentication (SFA). Granting users
access only after successfully demonstrating or displaying two or more of these
methods is known as multi-factor authentication (MFA).
Knowledge-based
Token-based
Characteristic-based
Password
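A small sketch of the SFA/MFA distinction: what matters is the number of distinct factor categories presented, not the number of credentials. The credential names below are invented for illustration:

```python
# Sketch: MFA requires two or more *different* factor categories
# (knowledge, token, characteristic), not merely two credentials.
FACTOR_CATEGORY = {
    "password": "knowledge", "pin": "knowledge",
    "smart_card": "token", "otp_device": "token",
    "fingerprint": "characteristic", "face_scan": "characteristic",
}

def authentication_type(credentials) -> str:
    categories = {FACTOR_CATEGORY[c] for c in credentials}
    return "MFA" if len(categories) >= 2 else "SFA"

print(authentication_type(["password", "pin"]))          # SFA -- same category
print(authentication_type(["password", "fingerprint"]))  # MFA
```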
Authorization
Accounting
Non-repudiation
Base Concepts
1. Authorization: the right or a permission that is granted to a system entity to
access a system resource
2. Integrity: the property that data has not been altered in an unauthorized manner
3. Confidentiality: the characteristic of data or information when it is not made
available or disclosed to unauthorized persons or processes
4. Privacy: the right of an individual to control the distribution of information about
themselves
5. Availability: Ensuring timely and reliable access to and use of information by
authorized users
6. Non-repudiation: The inability to deny taking an action, such as sending an email
message
7. Authentication: Access control process that compares one or more factors of
identification to validate that the identity claimed by a user or entity is known to
the system
Privacy
Information security risk reflects the potential adverse impacts that result from the
possibility of unauthorized access, use, disclosure, disruption, modification or
destruction of information and/or information systems. This definition represents
that risk is associated with threats, impact and likelihood, and it also indicates that
IT risk is a subset of business risk.
Finally, the security team will consider the likely results if a threat is realized and an
event occurs. Impact is the magnitude of harm that can be expected to result from the
consequences of unauthorized disclosure of information, unauthorized modification of
information, unauthorized destruction of information, or loss of information or information
system availability.
Think about the impact and the chain of reaction that can result when an event occurs
by revisiting the pickpocket scenario: Risk comes from the intersection of those
three concepts.
Risk Identification
Risk Assessment
Risk Treatment
Risk treatment relates to making decisions about the best actions to take regarding
the identified and prioritized risk. The decisions made are dependent on the attitude
of management toward risk and the availability — and cost — of risk mitigation. The
options commonly used to respond to risk are avoidance, acceptance, mitigation and
transference.
Risk Priorities
When risks have been identified, it is time to prioritize and analyze core risks through
qualitative risk analysis and/or quantitative risk analysis. This is necessary to
determine root cause and narrow down apparent risks and core risks. Security
professionals work with their teams to conduct both qualitative and quantitative analysis.
Understanding the organization’s overall mission and the functions that support the
mission helps to place risks in context, determine the root causes and prioritize the
assessment and analysis of these items. In most cases, management will provide
direction for using the findings of the risk assessment to determine a prioritized set of
risk-response actions.
One effective method to prioritize risk is to use a risk matrix, which helps identify
priority as the intersection of likelihood of occurrence and impact. It also gives the
team a common language to use with management when determining the final
priorities. For example, a low likelihood and a low impact might result in a low priority,
while an incident with a high likelihood and high impact will result in a high priority.
Assignment of priority may relate to business priorities, the cost of mitigating a risk or
the potential for loss if an incident occurs.
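The risk matrix idea above can be sketched as a small function; the 1-3 scale and the priority cutoffs here are illustrative assumptions, not a prescribed standard:

```python
def risk_priority(likelihood, impact):
    """Map likelihood and impact (each 1=low .. 3=high) to a priority band.

    Priority is the intersection of likelihood and impact: low/low yields a
    low priority, high/high yields a high priority.
    """
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_priority(1, 1))  # low
print(risk_priority(3, 3))  # high
```

In practice the bands and cutoffs would be set by management, since assignment of priority also reflects business priorities and mitigation cost.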
When making decisions based on risk priorities, organizations must evaluate the
likelihood and impact of the risk as well as their tolerance for different sorts of risk. A
company in Hawaii is more concerned about the risk of volcanic eruptions than a
company in Chicago, but the Chicago company will have to plan for blizzards. In
those cases, determining risk tolerance is up to the executive management and board
of directors. If a company chooses to ignore or accept risk, exposing workers to
asbestos, for example, it puts the company in a position of tremendous liability.
Risk Tolerance
The attitude management takes toward risk is often likened to the entity’s appetite
for risk. How much risk are they willing to take? Does management welcome risk or
want to avoid it? The level of risk tolerance varies across organizations, and even
internally: Different departments may have different attitudes toward what is acceptable
or unacceptable risk.
Governance Elements
When leaders and management implement the systems and structures that the
organization will use to achieve its goals, they are guided by laws and regulations
created by governments to enact public policy. Laws and regulations guide the
development of standards, which cultivate policies, which result in procedures.
Hardening is the process of applying secure configurations (to reduce the attack
surface) and locking down various hardware, communications systems and software,
including the operating system, web server, application server and applications, etc.
This module introduces configuration management practices that will ensure systems
are installed and maintained according to industry and organizational security
standards.
Data Handling
Data itself goes through its own life cycle as users create, use, share and modify it.
The data security life cycle model is useful because it can align easily with the
different roles that people and organizations perform during the evolution of data
from creation to destruction (or disposal). It also helps put the different data states
of in use, at rest and in motion, into context.
All ideas, data, information or knowledge can be thought of as going through six major
sets of activities throughout their lifetime. Conceptually, these are: create, store, use,
share, archive and destroy.
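As a sketch (the phase names below are the ones commonly cited for the data security life cycle, assumed here for illustration), the life cycle can be modeled as an ordered enumeration:

```python
from enum import IntEnum

class DataPhase(IntEnum):
    # Phases commonly used in data security life cycle models.
    CREATE = 1
    STORE = 2
    USE = 3
    SHARE = 4
    ARCHIVE = 5
    DESTROY = 6

def next_phase(phase):
    """Advance one step through the life cycle; DESTROY is terminal."""
    if phase is DataPhase.DESTROY:
        return None
    return DataPhase(phase + 1)

print(next_phase(DataPhase.CREATE).name)  # STORE
```

Modeling the phases explicitly makes it easy to attach per-phase controls (e.g., encryption at STORE, sanitization at DESTROY).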
Highly restricted: Compromise of data with this sensitivity label could possibly
put the organization’s future existence at risk. Compromise could lead to
substantial loss of life, injury or property damage, and the litigation and claims
that would follow.
Moderately restricted: Compromise of data with this sensitivity label could lead
to loss of temporary competitive advantage, loss of revenue or disruption of
planned investments or activities.
Low sensitivity (sometimes called “internal use only”): Compromise of data with
this sensitivity label could cause minor disruptions, delays or impacts.
Unrestricted public data: As this data is already published, no harm can come
from further dissemination or disclosure.
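A classification scheme like this is often encoded as an ordered ranking so handling rules can be derived from it; the numeric ranks and the encryption threshold below are hypothetical policy choices for illustration:

```python
# Hypothetical numeric ranking of the sensitivity labels described above.
SENSITIVITY = {
    "highly restricted": 3,
    "moderately restricted": 2,
    "low sensitivity": 1,
    "unrestricted": 0,
}

def requires_encryption(label):
    """Example policy: encrypt anything moderately restricted or above."""
    return SENSITIVITY[label] >= 2

print(requires_encryption("highly restricted"))  # True
print(requires_encryption("unrestricted"))       # False
```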
Log reviews are an essential function not only for security assessment and testing but
also for identifying security incidents, policy violations, fraudulent activities and
operational problems near the time of occurrence. Log reviews support audits –
forensic analysis related to internal and external investigations – and provide support for
organizational security baselines. Review of historic audit logs can determine if a
vulnerability identified in a system has been previously exploited.
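A minimal sketch of automated log review: tallying failed logins per user so repeated failures stand out near the time of occurrence. The log format used here is hypothetical:

```python
import re

def count_failed_logins(log_lines):
    """Tally failed-login events per user.

    Assumes a hypothetical log format containing 'FAILED LOGIN user=<name>'.
    """
    counts = {}
    for line in log_lines:
        m = re.search(r"FAILED LOGIN user=(\w+)", line)
        if m:
            user = m.group(1)
            counts[user] = counts.get(user, 0) + 1
    return counts

sample = [
    "2024-01-01T10:00:00 FAILED LOGIN user=alice",
    "2024-01-01T10:00:05 FAILED LOGIN user=alice",
    "2024-01-01T10:01:00 LOGIN OK user=bob",
]
print(count_failed_logins(sample))  # {'alice': 2}
```

Real deployments use a SIEM for this, but the principle is the same: aggregate events so policy violations and incidents can be spotted quickly.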
Encryption Overview
Almost every action we take in our modern digital world involves cryptography.
Encryption protects our personal and business transactions; digitally signed software
updates verify their creator’s or supplier’s claim to authenticity. Digitally signed
contracts, binding on all parties, are routinely exchanged via email without fear of being
repudiated later by the sender.
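True digital signatures require asymmetric cryptography (and a third-party library in Python), but the related building block of a keyed integrity check can be sketched with the standard hmac module. Note the deliberate caveat: because both parties share the key, HMAC provides integrity and authenticity between them, not the non-repudiation a signature gives:

```python
import hmac
import hashlib

def tag(key, message):
    """Compute a keyed integrity tag (HMAC-SHA256) over a message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, received_tag):
    """Constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(tag(key, message), received_tag)

key = b"shared-secret"        # illustrative key; never hardcode real keys
t = tag(key, b"pay $100 to Bob")
print(verify(key, b"pay $100 to Bob", t))   # True
print(verify(key, b"pay $900 to Bob", t))   # False: tampering detected
```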
Domain D5.2.1
Configuration Management
i. Identification: baseline identification of a system and all its components,
interfaces and documentation.
ii. Baseline: a security baseline is a minimum level of protection that can be
used as a reference point. Baselines provide a way to ensure that updates
to technology and architectures are subjected to the minimum understood
and acceptable level of security requirements.
iii. Change Control: An update process for requesting changes to a
baseline, by means of making changes to one or more components in that
baseline. A review and approval process for all changes. This includes
updates and patches.
iv. Verification & Audit: A regression and validation process, which may
involve testing and analysis, to verify that nothing in the system was
broken by a newly applied set of changes. An audit process can validate
that the currently in-use baseline matches the sum total of its initial
baseline plus all approved changes applied in sequence.
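The verification-and-audit step above can be sketched as a hash comparison: fingerprint the baseline, fingerprint the current state, and report drift. The file names and contents are illustrative:

```python
import hashlib

def fingerprint(files):
    """Map each file name to the SHA-256 digest of its contents (bytes)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def drifted(baseline, current):
    """Names whose current digest differs from (or is missing in) the baseline."""
    return {name for name, digest in baseline.items() if current.get(name) != digest}

base = fingerprint({"sshd_config": b"PermitRootLogin no\n"})
cur = fingerprint({"sshd_config": b"PermitRootLogin yes\n"})
print(drifted(base, cur))  # {'sshd_config'}
```

An audit then asks whether each drifted item corresponds to an approved change; any unexplained difference is a finding.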
Security governance that does not align properly with organizational goals can lead to
implementation of security policies and decisions that unnecessarily inhibit productivity,
impose undue costs and hinder strategic intent.
All policies must support any regulatory and contractual obligations of the organization.
Sometimes it can be challenging to ensure the policy encompasses all requirements
while remaining simple enough for users to understand.
Here are six common security-related policies that exist in most organizations.
Data Handling Policy: Appropriate use of data: This aspect of the policy defines
whether data is for use within the company, is restricted for use by only certain
roles or can be made public to anyone outside the organization. In addition,
some data has associated legal usage definitions. The organization’s policy
should spell out any such restrictions or refer to the legal definitions as required.
Proper data classification also helps the organization comply with pertinent laws
and regulations. For example, classifying credit card data as confidential can
help ensure compliance with the PCI DSS. One of the requirements of this
standard is to encrypt credit card information. Data owners who correctly defined
the encryption aspect of their organization’s data classification policy will require
that the data be encrypted according to the specifications defined in this
standard.
Password Policy: Every organization should have a password policy in place that
defines expectations of systems and users. The password policy should describe
senior leadership's commitment to ensuring secure access to data, outline any
standards that the organization has selected for password formulation, and
identify who is designated to enforce and validate the policy.
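Password formulation standards are typically enforced in code. A minimal sketch, assuming an illustrative policy (12-character minimum with mixed character classes; a real organization's policy may differ):

```python
def meets_policy(password, min_length=12):
    """Check a password against illustrative formulation rules."""
    if len(password) < min_length:
        return False
    has_upper = any(c.isupper() for c in password)
    has_lower = any(c.islower() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return has_upper and has_lower and has_digit

print(meets_policy("Tr0ub4dor&horse"))  # True
print(meets_policy("short1A"))          # False: below minimum length
```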
Acceptable Use Policy (AUP): The acceptable use policy (AUP) defines
acceptable use of the organization’s network and computer systems and can
help protect the organization from legal action. It should detail the appropriate
and approved usage of the organization’s assets, including the IT environment,
devices and data. Each employee (or anyone having access to the organization’s
assets) should be required to sign a copy of the AUP, preferably in the presence
of another employee of the organization, and both parties should keep a copy of
the signed AUP.
Policy aspects commonly included in AUPs: Data access, System access, Data
disclosure, Passwords, Data retention, Internet usage, Company device usage
Bring Your Own Device (BYOD): An organization may allow workers to acquire
equipment of their choosing and use personally owned equipment for business
(and personal) use. This is sometimes called bring your own device (BYOD).
Another option is to present the teleworker or employee with a list of approved
equipment and require the employee to select one of the products on the trusted
list.
Letting employees choose the device that is most comfortable for them may be good for
employee morale, but it presents additional challenges for the security professional
because it means the organization loses some control over standardization and privacy.
If employees are allowed to use their phones and laptops for both personal and
business use, this can pose a challenge if, for example, the device has to be examined
for a forensic audit. It can be hard to ensure that the device is configured securely and
does not have any backdoors or other vulnerabilities that could be used to access
organizational data or systems.
All employees must read and agree to adhere to this policy before any access to the
systems, network and/or data is allowed. If and when the workforce grows, so too will
the problems with BYOD. Certainly, the appropriate tools are going to be necessary to
manage the use of and security around BYOD devices and usage. The organization
needs to establish clear user expectations and set the appropriate business rules.
Privacy Policy: Often, personnel have access to personally identifiable
information (PII) (also referred to as electronic protected health information
[ePHI] in the health industry). It is imperative that the organization documents
that the personnel understand and acknowledge the organization’s policies and
procedures for handling of that type of information and are made aware of the
legal repercussions of handling such sensitive data. This type of documentation
is similar to the AUP but is specific to privacy-related data.
The organization should also create a public document that explains how private
information is used, both internally and externally. For example, it may be required that
a medical provider present patients with a description of how the provider will protect
their information (or a reference to where they can find this description, such as the
provider’s website).
Throughout the system life cycle, changes made to the system, its individual
components and its operating environment all have the capability to introduce new
vulnerabilities and thus undermine the security of the enterprise. Change management
requires a process to implement the necessary changes so they do not adversely affect
business operations.
Policies will be set according to the needs of the organization and its vision and mission.
Each of these policies should have a penalty or a consequence attached in case of
noncompliance. The first time may be a warning; the next might be a forced leave of
absence or suspension without pay, and a critical violation could even result in an
employee’s termination. All of this should be outlined clearly during onboarding,
particularly for information security personnel. It should be made clear who is
responsible for enforcing these policies, and the employee must sign off on them and
have documentation saying they have done so. This process could even include a few
questions in a survey or quiz to confirm that the employees truly understand the policy.
These policies are part of the baseline security posture of any organization. Any security
or data handling procedures should be backed up by the appropriate policies.
Documentation: All of the major change management practices address a common set
of core activities that start with a request for change (RFC) and move through various
development and test stages until the change is released to the end users. From first to
last, each step is subject to some form of formalized management and decision-making;
each step produces accounting or log entries to document its results.
Approval: These processes typically include: Evaluating the RFCs for completeness,
Assignment to the proper change authorization process based on risk and
organizational practices, Stakeholder reviews, resource identification and allocation,
Appropriate approvals or rejections, and Documentation of approval or rejection.
Rollback: Depending upon the nature of the change, a variety of activities may need to
be completed. These generally include: Scheduling the change, Testing the change,
Verifying the rollback procedures, Implementing the change, Evaluating the change for
proper and effective operation, and Documenting the change in the production
environment. Rollback authority would generally be defined in the rollback plan, which
might be immediate or scheduled as a subsequent change if monitoring of the change
suggests inadequate performance.
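The RFC workflow described above (documentation, approval, rollback) can be sketched as a tiny state machine; the stage names and the rollback target are illustrative assumptions:

```python
# Illustrative RFC workflow stages, in order.
STAGES = ["requested", "approved", "tested", "deployed"]

def advance(stage):
    """Move an RFC to the next stage; 'deployed' is terminal."""
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else stage

def rollback(stage):
    """A failed deployment rolls back to the approved baseline for rework."""
    return "approved" if stage == "deployed" else stage

print(advance("requested"))  # approved
print(rollback("deployed"))  # approved
```

Each transition in a real system would also produce the accounting or log entries the documentation step requires.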
Purpose
The purpose of awareness training is to make sure everyone knows what is expected of
them, based on responsibilities and accountabilities, and to find out if there is any
carelessness or complacency that may pose a risk to the organization. We will be able
to align the information security goals with the organization’s missions and vision and
have a better sense of what the environment is.
What is Security Awareness Training?
Let’s start with a clear understanding of the three different types of learning activities
that organizations use, whether for information security or for any other purpose:
You’ll notice that none of these have an expressed or implied degree of formality,
location or target audience. (Think of a newly hired senior executive with little or no
exposure to the specific compliance needs your organization faces; first, someone has
to get their attention and make them aware of the need to understand. The rest can
follow.)
Education may help workers in a secure server room understand the interaction of the
various fire and smoke detectors, suppression systems, alarms and their interactions
with electrical power, lighting and ventilation systems. Training would provide those
workers with task-specific, detailed learning about the proper actions each should take
in the event of an alarm, a suppression system going off without an alarm, a ventilation
system failure or other contingency. This training would build on the learning acquired
via the educational activities. Awareness activities would include not only posting the
appropriate signage, floor or doorway markings, but also other indicators to help
workers detect an anomaly, respond to an alarm and take appropriate action. In this
case, awareness is a constantly available reminder of what to do when the alarms go
off.
Education may be used to help select groups of users better understand the ways in
which social engineering attacks are conducted and engage those users in creating and
testing their own strategies for improving their defensive techniques. Training will help
users increase their proficiency in recognizing a potential phishing or similar attempt,
while also helping them practice the correct responses to such events. Training may
include simulated phishing emails sent to users on a network to test their ability to
identify a phishing email, raising users’ overall awareness of the threat posed by
phishing, vishing, SMS phishing (also called “smishing”) and other social engineering
tactics. Awareness techniques can also alert selected users to new or novel approaches
that such attacks might be taking. Let’s look at some common risks and why it’s
important to include them in your security awareness training programs.
Phishing
The use of phishing attacks to target individuals, entire departments and even
companies is a significant threat that the security professional needs to be aware of and
be prepared to defend against. Countless variations on the basic phishing attack have
been developed in recent years, leading to a variety of attacks that are deployed
relentlessly against individuals and networks in a never-ending stream of emails, phone
calls, spam, instant messages, videos, file attachments and many other delivery
mechanisms.
Phishing attacks that attempt to trick highly placed officials or private individuals with
sizable assets into authorizing large fund wire transfers to previously unknown entities
are known as whaling attacks.
Social Engineering
Social engineering is an important part of any security awareness training program for
one very simple reason: bad actors know that it works. For the cyberattackers, social
engineering is an inexpensive investment with a potentially very high payoff. Social
engineering, applied over time, can extract significant insider knowledge about almost
any organization or individual.
Most social engineering techniques are not new. Many have even been taught as basic
fieldcraft for espionage agencies and are part of the repertoire of investigative
techniques used by real and fictional police detectives. A short list of the tactics that we
see across cyberspace currently includes:
Phone phishing or vishing: Using a rogue interactive voice response (IVR) system to re-
create a legitimate-sounding copy of a bank or other institution’s IVR system. The victim
is prompted through a phishing email to call in to the “bank” via a provided phone
number to verify information such as account numbers, account access codes or a PIN
and to confirm answers to security questions, contact information and addresses. A
typical vishing system will reject logins continually, ensuring the victim enters PINs or
passwords multiple times, often disclosing several different passwords. More advanced
systems may be used to transfer the victim to a human posing as a customer service
agent for further questioning.
Password Protection
We use many different passwords and systems. Many password managers will store a
user’s passwords for them so the user does not have to remember all their passwords
for multiple systems. The greatest disadvantage of these solutions is the risk of
compromise of the password manager.
Organizations should encourage the use of different passwords for different systems
and should provide a recommended password management solution for their users.
Common poor password practices include: reusing passwords for multiple systems,
especially using the same password for business and personal use; writing down
passwords and leaving them in unsecured areas; and sharing a password with tech
support or a co-worker.
CIA Triad Deep Dive
Confidentiality
Integrity
Availability
Systems and data are accessible at the time users need them.
Associated with criticality as it represents the importance an organization gives to
data or an information system in performing its operation or achieving its mission.
Authentication
When users have stated their identity, it is necessary to validate that they are the rightful
owners of that identity. This process of verifying or proving the user’s identification is
known as authentication. Simply put, authentication is a process to prove the identity
of the requestor.
Non-Repudiation
Non-repudiation is a legal term defined as the protection against an individual
falsely denying having performed a particular action.
It provides the capability to determine whether a given individual took a particular
action, such as created information, approved information or sent or received a
message.
Non-repudiation methodologies ensure that people are held responsible for
transactions they conducted.
Incident Terminology
Breach
o The loss of control, compromise, unauthorized disclosure, unauthorized
acquisition, or any similar occurrence where:
o a person other than an authorized user accesses or potentially accesses
personally identifiable information;
o or an authorized user accesses personally identifiable information for
other than an authorized purpose.
Event - Any observable occurrence in a network or system.
Exploit - A particular attack. It is named this way because these attacks exploit
system vulnerabilities.
Incident - An event that actually or potentially jeopardizes the CIA of an
information system or the information the system processes, stores or transmits.
Intrusion - A security event, or combination of events, that constitutes a
deliberate security incident in which an intruder gains, or attempts to gain, access
to a system or system resource without authorization.
Threat
o Any circumstance or event with the potential to adversely impact
organizational operations (including mission, functions, image or
reputation),
o organizational assets, individuals, other organizations or the nation
through an information system
o via unauthorized access, destruction, disclosure, modification of
information and/or denial of service.
Vulnerability - Weakness in an information system, system security procedures,
internal controls or implementation that could be exploited by a threat source.
Zero Day - A previously unknown system vulnerability with the potential of
exploitation without risk of detection or prevention because it does not, in
general, fit recognized patterns, signatures or methods.
Components of incident response
Preparation
o Develop a policy approved by management.
o Identify critical data and systems, single points of failure.
o Train staff on incident response.
o Implement an incident response team.
o Practice Incident Identification. (First Response)
o Identify Roles and Responsibilities.
o Plan the coordination of communication between stakeholders.
o Consider the possibility that a primary method of communication may not
be available.
Detection and Analysis
o Monitor all possible attack vectors.
o Analyze incident using known data and threat intelligence.
o Prioritize incident response.
o Standardize incident documentation.
Containment
o Gather evidence.
o Choose an appropriate containment strategy.
o Identify the attacker.
o Isolate the attack.
Post-Incident Activity
o Identify evidence that may need to be retained.
o Document lessons learned.
o Retrospective
Preparation.
Detection and Analysis.
Containment, Eradication and Recovery.
Post-incident Activity.
Governance
Access Control
Access controls are not just about restricting access to information systems and
data, but also about allowing access.
It is about granting the appropriate level of access to authorized personnel and
processes and denying access to unauthorized functions or individuals.
Access is based on three elements:
o Subjects - an entity that requests access to the assets.
o Objects - anything that a subject attempts to access.
o Rules - instruction to allow/deny an object to access.
Privileged Access Management
Consider a human user identity that is granted various create, read, update, and
delete privileges on a database.
Without privileged access management, the system’s access control would have
those privileges assigned to the administrative user in a static way, effectively
“on” 24 hours a day, every day.
Security would be dependent upon the login process to prevent misuse of that
identity.
Just-in-time (JIT) privileged access management, by contrast, includes role-
based specific subsets of privileges that only become active in real-time when
the identity is requesting the use of a resource or service.
Segregation of duties
Two-Person Integrity
The two-person rule is a security strategy that requires a minimum of two people
to be in an area together, making it impossible for a person to be in the area
alone.
Use of the two-person rule can help reduce insider threats to critical areas by
requiring at least two individuals to be present at any time.
It is possible, of course, that two individuals can willfully work together to bypass
the segregation of duties so that they could jointly commit fraud. This is
called collusion.
For example, consider a bank vault protected by two separate combination locks.
Some personnel know one of the combinations and some know the other, but no
one person knows both combinations. Two people must work together to open
the vault; thus, the vault is under dual control.
Logical access controls are electronic methods that limit someone from getting
access to systems, and sometimes even to tangible assets or areas. Types of
logical access controls include:
o Passwords.
o Biometrics (implemented on a system, such as a smartphone or a laptop).
o Badge/token readers connected to a system.
Discretionary Access Control (DAC)
This methodology relies on the discretion of the owner of the access control
object to determine the access control subject’s specific rights.
Hence, security of the object is literally up to the discretion of the object owner.
DACs are not very scalable; they rely on the access control decisions made by
each individual object owner, and it can be difficult to find the source of access
control issues when problems occur.
Mandatory Access Control (MAC)
Enforced across all subjects and objects within the boundary of an information
system.
Only properly designated security administrators, as trusted subjects, can modify
any of the security rules that are established for subjects and objects within the
system.
This also means that for all subjects defined by the organization (that is, known
to its integrated identity management and access control system), the
organization assigns a subset of total privileges for a subset of objects.
Role-Based Access Control (RBAC)
Role-based access control provides each worker privileges based on what role
they have in the organization.
Monitoring these role-based permissions is important. If you expand one
person’s permissions for a specific reason - say, a junior worker’s permissions
are expanded so they can temporarily act as the department manager - but
forget to change them back when the new manager is hired, the next person to
come in at that junior level might inherit those permissions when it is not
appropriate for them to have them.
This is called privilege creep or permissions creep - the gradual accumulation of
access rights beyond what an individual needs to do his or her job.
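RBAC and privilege creep can be sketched with role-to-permission sets; the roles and permissions below are illustrative:

```python
# Illustrative roles and their permission sets.
ROLES = {
    "junior": {"read"},
    "manager": {"read", "approve", "sign_off"},
}

def effective_permissions(assigned_roles):
    """Union of permissions across all roles currently assigned to a user."""
    perms = set()
    for role in assigned_roles:
        perms |= ROLES[role]
    return perms

user_roles = {"junior"}
user_roles.add("manager")      # temporary expansion to cover for a manager...
user_roles.discard("manager")  # ...which must be revoked, or privilege creep sets in
print(effective_permissions(user_roles))  # {'read'}
```

The point of the sketch: effective permissions are derived from current role assignments, so revoking the temporary role immediately removes the extra rights; creep happens when that revocation step is forgotten.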
Network Devices
Device Address
While MAC addresses are generally assigned in the firmware of the interface, IP
hosts associate that address with a unique logical address.
This logical IP address represents the network interface within the network and
can be useful to maintain communications when a physical device is swapped
with new hardware.
Examples are 192.168.1.1 and 2001:db8::ffff:0:1.
Rule 1: One contiguous run of all-zero 16-bit fields in an IPv6 address can be
replaced with :: (this may be done only once per address). This rule is also
known as zero compression.
For example, 2001:0db8:0000:0000:0000:0000:0000:0001 can be written as
2001:db8::1.
Rule 2: Leading zeros (0s) in each 16-bit field can be removed, but each field
must keep at least one digit; a field of all zeros is written as a single 0.
Removing leading zeros does not change the value; trailing zeros, however,
cannot be removed. This rule is also known as leading zero compression.
For example, 2001:0db8:0000:0001:0000:0000:0000:0001 can be written as
2001:db8:0:1:0:0:0:1.
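Both notation rules are implemented by Python's standard ipaddress module, which can be used to check compressed and expanded forms:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
# .compressed applies zero compression and leading-zero removal.
print(addr.compressed)  # 2001:db8::1
# .exploded restores the full eight 16-bit fields.
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
```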
Physical ports:
Are the ports on routers, switches, servers, computers, etc. to which you
connect the wires (e.g., fiber-optic cables, Cat5 cables) to create a network.
Logical port
Also called a socket, a logical port is little more than an address number that
both ends of the communication link agree to use when transferring data.
Ports allow a single IP address to be able to support multiple simultaneous
communications, each using a different port number.
In the Application Layer of the TCP/IP model (which includes the Session,
Presentation, and Application Layers of the OSI model) reside numerous
application - or service-specific protocols.
Protocols
Data types are mapped using port numbers associated with services. For
example, web traffic (or HTTP) is port 80.
Secure web traffic (or HTTPS) is port 443.
You’ll note that in several cases a service (or protocol) may have two ports
assigned, one secure and one insecure.
When in doubt, systems should be implemented using the most secure version
of a protocol and its services that is possible.
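The secure/insecure pairings covered in this section can be summarized in a lookup table; the "service/port" key format is a hypothetical convention for illustration:

```python
# Hypothetical lookup table of insecure services and their secure
# alternatives, as "service/port" strings.
SECURE_ALTERNATIVE = {
    "ftp/21": "sftp/22",
    "telnet/23": "ssh/22",
    "smtp/25": "smtp-tls/587",
    "http/80": "https/443",
    "imap/143": "imaps/993",
}

def prefer_secure(service):
    """Return the secure counterpart if one exists, else the service itself."""
    return SECURE_ALTERNATIVE.get(service, service)

print(prefer_secure("http/80"))   # https/443
print(prefer_secure("ntp/123"))   # ntp/123 (already the preferred choice)
```

Note that not every protocol fits this pattern: SNMP, for instance, uses the same ports (161/162) for both secured and unsecured versions.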
FTP
Port 21, File Transfer Protocol (FTP) sends the username and password using
plaintext from the client to the server.
This could be intercepted by an attacker and later used to retrieve confidential
information from the server.
The secure alternative, SFTP, on port 22 uses encryption to protect the user
credentials and packets of data being transferred.
Telnet
Port 23, telnet, is used by many Linux systems and any other systems as a basic
text-based terminal.
All information to and from the host on a telnet connection is sent in plaintext and
can be intercepted by an attacker.
This includes username and password as well as all information that is being
presented on the screen, since this interface is all text.
Secure Shell (SSH) on port 22 uses encryption to ensure that traffic between the
host and terminal is not sent in a plaintext format.
SMTP
Port 25, Simple Mail Transfer Protocol (SMTP) is the default unencrypted port for
sending email messages.
Since it is unencrypted, data contained within the emails could be discovered by
network sniffing.
The secure alternative is to use port 587 for SMTP using Transport Layer
Security (TLS) which will encrypt the data between the mail client and the mail
server.
Time
Port 37, Time Protocol, may be in use by legacy equipment and has mostly been
replaced by using port 123 for Network Time Protocol (NTP).
NTP on port 123 offers better error-handling capabilities, which reduces the
likelihood of unexpected errors.
DNS
HTTP
Port 80, HyperText Transfer Protocol (HTTP) is the basis of nearly all web
browser traffic on the internet.
Information sent via HTTP is not encrypted and is susceptible to sniffing attacks.
HTTPS using TLS encryption is preferred, as it protects the data in transit
between the server and the browser.
Note that this is often notated as SSL/TLS.
Secure Sockets Layer (SSL) has been compromised and is no longer considered
secure.
It is now recommended for web servers and clients to use Transport Layer
Security (TLS) 1.3 or higher for the best protection.
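Enforcing the TLS 1.3 minimum can be sketched with Python's standard ssl module (requires a Python/OpenSSL build that supports TLS 1.3):

```python
import ssl

# Build a client context that refuses anything below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

Any connection attempt with this context to a server offering only TLS 1.2 or lower would fail the handshake, which is exactly the behavior the recommendation calls for.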
IMAP
Port 143, Internet Message Access Protocol (IMAP) is a protocol used for
retrieving emails.
IMAP traffic on port 143 is not encrypted and susceptible to network sniffing.
The secure alternative is to use port 993 for IMAP, which adds SSL/TLS security
to encrypt the data between the mail client and the mail server.
SNMP
Ports 161 and 162, Simple Network Management Protocol, are commonly used
to send and receive data used for managing infrastructure devices.
Because sensitive information is often included in these messages, it is
recommended to use SNMP version 2 or 3 (abbreviated SNMPv2 or SNMPv3) to
include encryption and additional security features.
Unlike many others discussed here, all versions of SNMP use the same ports, so
there is not a definitive secure and insecure pairing.
Additional context will be needed to determine if information on ports 161 and
162 is secured or not.
SMB
Port 445, Server Message Block (SMB), is used by many versions of Windows
for accessing files over the network.
Files are transmitted unencrypted, and many vulnerabilities are well-known.
Therefore, it is recommended that traffic on port 445 should not be allowed to
pass through a firewall at the network perimeter.
A more secure alternative is port 2049, Network File System (NFS). Although
NFS can use encryption, it is recommended that NFS not be allowed through
firewalls either.
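The blocking recommendation above can be modeled as a simple perimeter-filter check. This is a toy sketch, not firewall configuration; the port numbers come from the text, and the function name is illustrative.

```python
# Ports the text recommends dropping at the network edge: SMB (445) and
# NFS (2049). A perimeter firewall rule set would enumerate these as DROP
# targets on the internet-facing interface.
BLOCKED_AT_PERIMETER = {445, 2049}

def allow_inbound(port: int) -> bool:
    """Return True if traffic to this port may cross the perimeter."""
    return port not in BLOCKED_AT_PERIMETER

print(allow_inbound(443))  # → True  (HTTPS passes)
print(allow_inbound(445))  # → False (SMB dropped)
```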
LDAP
Types of threats
Spoofing
An attack with the goal of gaining access to a target system through the use of a
falsified identity.
Spoofing can be used against IP addresses, MAC addresses, usernames, system
names, wireless network SSIDs, email addresses and many other types of
logical identification.
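Email address spoofing is a simple illustration of how little is verified by default: nothing in SMTP itself checks the From header, so a sender can claim any identity. The addresses below are fabricated examples.

```python
from email.message import EmailMessage

# SMTP does not verify the From header; defenses such as SPF, DKIM and
# DMARC exist precisely because a sender can claim any address.
msg = EmailMessage()
msg["From"] = "ceo@trusted-company.example"   # forged identity
msg["To"] = "victim@example.com"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please process the attached invoice today.")

print(msg["From"])  # → ceo@trusted-company.example
```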
Phishing
Virus
The computer virus is perhaps the earliest form of malicious code to plague
security administrators.
As with biological viruses, computer viruses have two main functions:
propagation and destruction.
A virus is a self-replicating piece of code that spreads without the consent of a
user, but frequently with their assistance (a user has to click on a link or open a
file).
Worm
Trojan
Named after the ancient story of the Trojan horse, the Trojan is a software
program that appears benevolent but carries a malicious, behind-the-scenes
payload that has the potential to wreak havoc on a system or network.
For example, ransomware often uses a Trojan to infect a target machine and
then uses encryption technology to encrypt documents, spreadsheets and other
files stored on the system with a key known only to the malware creator.
On-path Attack
Side-channel
Insider Threat
Threats that arise from individuals who are trusted by the organization.
These could be disgruntled employees or employees involved in espionage.
Insider threats are not always willing participants.
A trusted user who falls victim to a scam could be an unwilling insider threat.
[[Part 1 - Describe the concepts of security, compliance, and identity]]
Malware
A program that is inserted into a system, usually covertly, with the intent of
compromising the confidentiality, integrity or availability of the victim’s data,
applications or operating system or otherwise annoying or disrupting the victim.
Ransomware
IDS
An IDS automates the inspection of logs and real-time system events to detect
intrusion attempts and system failures.
An IDS is intended as part of a defense-in-depth security plan.
It will work with, and complement, other security mechanisms such as firewalls,
but it does not replace them.
HIDS: Host Based Intrusion Detection Systems - Monitors a single computer or
host.
NIDS: Network Based Intrusion Detection Systems - Monitors a network by traffic
patterns.
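The log-inspection idea behind an IDS can be sketched as a toy check that flags source addresses with repeated failed logins. The log lines and threshold below are illustrative, not a real detection rule.

```python
from collections import Counter

# Toy HIDS-style check: flag source addresses with repeated failed logins.
LOG_LINES = [
    "Failed password for root from 203.0.113.7",
    "Failed password for admin from 203.0.113.7",
    "Accepted password for alice from 198.51.100.2",
    "Failed password for root from 203.0.113.7",
]
THRESHOLD = 3  # alert once an address accumulates this many failures

failures = Counter(
    line.rsplit(" ", 1)[-1] for line in LOG_LINES if line.startswith("Failed")
)
alerts = [ip for ip, count in failures.items() if count >= THRESHOLD]
print(alerts)  # → ['203.0.113.7']
```

Real systems apply far richer rules (signatures, anomaly baselines), but the pattern is the same: scan events, count, alert.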
The PID (Process ID) gives a rough indication of when a process was started: a
very low PID usually means the process launched during the boot sequence, while
a higher PID means it was started much later.
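The PID observation above is easy to see from any script, since on Unix-like systems PID 1 belongs to the init process started first at boot:

```python
import os

# PID 1 is the init process started first at boot; a process started
# later (such as this script) receives a much higher PID.
pid = os.getpid()
print(pid)
```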
Preventing Threats
SLA
Network Design
Network segmentation
VLANs are created by switches to logically segment a network without altering its
physical topology.
Defense in depth, Zero Trust - Reference: [[Part 1 - Describe the concepts of security,
compliance, and identity]]
The goal is to ensure that all devices wishing to join the network do so only when
they comply with the requirements laid out in the organization policies.
This visibility will encompass internal users as well as any temporary users such
as guests or contractors, etc., and any devices they may bring with them into the
organization.
Microsegmentation
VLAN
Virtual local area networks (VLANs) allow network administrators to use switches
to create software-based LAN segments within a physical network and to decide
whether VLANs can communicate with each other, which can segregate or
consolidate traffic across multiple switch ports.
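The segmentation rule can be modeled in a few lines: two ports can exchange traffic only when assigned to the same VLAN (ignoring inter-VLAN routing). Port names and VLAN IDs below are made up.

```python
# Toy model of switch-enforced VLAN segmentation: same VLAN ID means the
# ports share a broadcast domain and can talk; different IDs are isolated.
port_vlan = {
    "port1": 10,   # finance
    "port2": 10,   # finance
    "port3": 20,   # guest
}

def can_communicate(a: str, b: str) -> bool:
    """Without a router, traffic flows only within one VLAN."""
    return port_vlan[a] == port_vlan[b]

print(can_communicate("port1", "port2"))  # → True
print(can_communicate("port1", "port3"))  # → False
```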
VPN
Data Handling
Data itself goes through its own life cycle as users create, use, share and modify
it.
Degaussing
Labeling
Configuration Management
Process and discipline used to ensure that the only changes made to a system
are those that have been authorized and validated.
It is both a decision-making process and a set of control processes.
Identification
Change Control
Verification and Audit
A regression and validation process, which may involve testing and analysis, to
verify that nothing in the system was broken by a newly applied set of changes.
An audit process can validate that the currently in-use baseline matches the sum
total of its initial baseline plus all approved changes applied in sequence.
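The audit idea — current baseline equals initial baseline plus all approved changes applied in sequence — can be sketched with configurations modeled as dictionaries. The settings, change list, and `fingerprint` helper are illustrative assumptions.

```python
import hashlib

# Model configurations as dictionaries; the expected baseline is the
# initial baseline with each approved change applied in order.
initial_baseline = {"ssh": "v1", "tls": "1.2"}
approved_changes = [("tls", "1.3"), ("firewall", "enabled")]

expected = dict(initial_baseline)
for key, value in approved_changes:
    expected[key] = value

# What the audit actually observes on the system.
current_baseline = {"ssh": "v1", "tls": "1.3", "firewall": "enabled"}

def fingerprint(config: dict) -> str:
    """Stable hash of a configuration, for comparison in audit reports."""
    return hashlib.sha256(repr(sorted(config.items())).encode()).hexdigest()

print(fingerprint(current_baseline) == fingerprint(expected))  # → True
```

A mismatch between the two fingerprints would indicate an unauthorized or unrecorded change.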