Security Operations
Uses of Encryption
Encryption is used in two different environments: to protect data at rest and to protect data
in transit.

● Data at Rest—Data at rest is simply stored data. You can encrypt individual files, entire
disks, or the contents of a mobile device. Full-disk encryption (FDE) is a technology
built into some operating systems that automatically encrypts all of the data stored
on a device. FDE technology is particularly useful for laptops, tablets, smartphones,
and other devices that might be lost or stolen.

● Data in Transit—Data in transit is data that's moving over a network. You can protect this data as well by using encryption. When you access a website using the standard HTTP protocol, that data is unencrypted, and anyone who observes your network activity can eavesdrop on your web use. Accessing the same site over HTTPS wraps the connection in Transport Layer Security (TLS), so an observer sees only ciphertext.
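As a concrete illustration of protecting data at rest, here is a minimal Python sketch using the third-party cryptography package's Fernet recipe (authenticated symmetric encryption). The file names are hypothetical, and a real deployment would rely on FDE or a vetted key-management service rather than ad hoc scripts:

```python
# Minimal sketch: encrypting a file at rest with symmetric encryption.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, keep this in a key vault
cipher = Fernet(key)

with open("payroll.csv", "rb") as f:          # hypothetical sensitive file
    ciphertext = cipher.encrypt(f.read())

with open("payroll.csv.enc", "wb") as f:      # store only the ciphertext
    f.write(ciphertext)

# Later, any process holding the key can recover the plaintext:
plaintext = cipher.decrypt(ciphertext)
```

Note that the security of the scheme now rests entirely on the key: anyone who obtains it can decrypt the data, which is why key management matters as much as the encryption itself.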
Data Handling
The data life cycle is a useful way to understand the process that data goes through
within an organization. It covers everything from the time data is first created until
the time it is eventually destroyed.
Data Lifecycle
The data life cycle consists of six stages:
● Create
In the first stage of the life cycle, create, the organization generates new data either in an
on-premises system or in the cloud. The create stage also includes modifications to existing data.
● Store
From there, the second stage of the life cycle is store. In this stage, the organization places the data into one or more storage systems. Again, these can be either on premises or at a cloud service provider.

● Use
The next stage, use, is where the active use of data takes place. Users and systems view and process data in this stage.


● Share
In the fourth stage, share, the data is made available to other people through one or more sharing mechanisms. This might include providing customers with a link to a file, modifying access controls so that other employees can view it, or other similar actions.

● Archive
When the data is no longer being actively used, it moves to the fifth stage, archive. In this stage, data is retained in long-term storage where it is not immediately accessible but can be restored to active use if necessary.

The archiving of data should follow an organization's data retention policy. This policy should state how long the organization will preserve different types of records and when they should be destroyed. As a general practice, organizations should dispose of records when they are no longer necessary for a legitimate business purpose.


● Destroy
In the final stage of the life cycle, destroy, data is destroyed when it is no longer needed. This destruction should take place using a secure disposal method, to avoid situations where an attacker obtains paper or electronic media and manages to reconstruct sensitive data that still exists on that media in some form.
What type of attack results from improper destruction of data?
Destroying Electronic Records
The National Institute of Standards and Technology (NIST) provides a set of guidelines for secure
media sanitization in its Special Publication 800-88. It includes three different activities for
sanitizing electronic media:

● Clearing, the most basic sanitization technique, consists simply of writing new data to the device that overwrites the sensitive data. Clearing is effective against most types of casual analysis (a minimal overwrite sketch follows this list).
● Purging is similar to clearing but uses more advanced techniques and takes longer. Purging might use cryptographic erase, destroying the encryption keys so that data remaining on the disk is unreadable. Purging also includes degaussing, which applies strong magnetic fields to destroy data stored on magnetic media.
● Destroying is the ultimate type of data sanitization. You shred, pulverize (press or crush into powder or a soft mass), melt, incinerate, or otherwise completely destroy the media so that it is impossible for someone to reconstruct it. The downside of destruction, of course, is that you can't reuse the media as you would with clearing or purging.
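As referenced above, here is a minimal Python sketch of the clearing idea: overwrite a file's contents before deleting it. The path is hypothetical, and on SSDs and journaling file systems an in-place overwrite does not guarantee the old blocks are erased, which is part of why clearing is only suitable against casual analysis:

```python
# Minimal sketch of "clearing": overwrite a file with random bytes,
# then delete it. Illustrative only; see the caveats above.
import os

def clear_file(path: str, passes: int = 1) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random data
            f.flush()
            os.fsync(f.fileno())        # force the overwrite out to disk
    os.remove(path)

clear_file("old_customer_list.txt")     # hypothetical file
```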
Destroying Paper Records
You also should destroy paper records when they reach the end of their useful life.

Shredding using a cross-cut shredder cuts them into very small pieces that are very
difficult to reassemble.

Pulping uses chemical processes to remove the ink from paper and return it to pulp
form for recycling into new paper products.

Incineration burns paper records, although it is less environmentally friendly because it creates carbon emissions, and, unlike pulping or shredding, burned paper can't be recycled.
Data Classification
Information classification serves as the backbone of data governance efforts designed to
help organizations provide appropriate protections for sensitive data.

Data classification policies describe the security levels of information used in an organization and the process for assigning information to a particular classification level. The different security categories, or classifications, used by an organization determine the appropriate storage, handling, and access requirements for classified information.

Security classifications are assigned based on both the sensitivity of information and the
criticality of that information to the enterprise.

Classification Schemes
Classification schemes vary, but all basically try to group information into high, medium, and low sensitivity levels and differentiate between public and private information.
For example, the military uses the following classification scheme to safeguard government data:
Top Secret; Secret; Confidential; Unclassified
A business, however, might use friendlier terms to accomplish the same goal, such as: Highly
Sensitive; Sensitive; Internal; Public
Businesses use these terms to describe how they will handle sensitive proprietary and customer
data.

Data classification is extremely important because it is used as the basis for other data security
decisions. For example, a company might require the use of strong encryption to protect Sensitive
and Highly Sensitive information both at rest and in transit. This is an example of a data handling
requirement.
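One way to picture how classification drives handling decisions is a simple lookup from level to requirements. The levels below mirror the business scheme above, while the policy values are purely hypothetical:

```python
# Minimal sketch: mapping classification levels to handling requirements.
# The HANDLING policy values are hypothetical, not a standard.
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SENSITIVE = 3
    HIGHLY_SENSITIVE = 4

HANDLING = {
    Classification.PUBLIC:           {"encrypt_at_rest": False, "encrypt_in_transit": False},
    Classification.INTERNAL:         {"encrypt_at_rest": False, "encrypt_in_transit": True},
    Classification.SENSITIVE:        {"encrypt_at_rest": True,  "encrypt_in_transit": True},
    Classification.HIGHLY_SENSITIVE: {"encrypt_at_rest": True,  "encrypt_in_transit": True},
}

def encrypt_at_rest_required(level: Classification) -> bool:
    return HANDLING[level]["encrypt_at_rest"]

print(encrypt_at_rest_required(Classification.SENSITIVE))   # True
```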

Labeling
When an organization classifies information, it should also include labeling requirements that
apply consistent markings to sensitive information. Using standard labeling practices ensures that
users are able to consistently recognize sensitive information and handle it appropriately.
Logging
In computing, a log is a record of events that have occurred, typically including a timestamp and
event details. Logs are commonly used to troubleshoot issues, monitor system performance, and
identify security concerns.
Logging is the capturing and storing of events that occur for later analysis. An event is an
occurrence of an activity on any endpoint system or device. Logging is used to support auditing,
troubleshooting, system analysis, and security incident response.

Software programs and systems generate log files containing information about the application,
user device, time, IP address, and more. Various types of logs exist, such as application logs,
system logs, and security logs. Logs can be stored in various formats, including plain text files,
databases, and specialized log management systems.

Log management refers to all activities undertaken to capture, store, and maintain logs so that
they are useful to the organization. Logs are not only highly valuable to the organization but also
highly sensitive and should be protected from modification/deletion and should be accessible
only by authorized users.
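As a small illustration, Python's standard logging module produces the kind of timestamped, detailed records described here; the file name and events are hypothetical:

```python
# Minimal sketch: writing timestamped application events to a log file
# with Python's standard logging module.
import logging

logging.basicConfig(
    filename="app.log",                  # hypothetical log file
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("auth")

log.info("login success user=alice src=10.0.0.5")
log.warning("login failure user=alice src=203.0.113.7")
# app.log now contains timestamped entries ready for later analysis.
```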
Log Monitoring
At the heart of modern IT operations, log monitoring provides valuable insights into
system health and performance.

Log monitoring is the process of collecting, analyzing, and acting on log data from
various sources. This can include applications and infrastructure — compute, network,
and storage.

When developers and operational teams monitor logs, they’re doing so to find
anomalies and issues within a system so that they can troubleshoot those issues as
efficiently as possible.

To guarantee performance, availability, and security, logs need to be continuously observed by developers and engineering teams. This process, commonly known as log monitoring, happens in real time as logs are recorded.
What is the difference between log monitoring and log analytics?
Log Monitoring Techniques
It is critical that organizations capture the appropriate events so that they can troubleshoot problems and investigate security incidents. To reap the benefits of logging, organizations implement processes to regularly review and monitor the logs:
● Manual
Manual log review is when an authorized person logs into a system and manually reviews the log files.
This is typically done when investigating the cause of a system error or security incident.
● Automated
Automated review is accomplished by leveraging tools that aggregate, correlate, and alert on log data ingested from many different sources (such as operating system [OS] logs, application logs, network logs, IDS/IPS, antivirus, and so on). This is often accomplished by leveraging a security information and event management (SIEM) system.
A SIEM system is a tool that ingests logs from various sources and serves as a central secure log
repository. SIEMs also include analysis and correlation capability and often come with built-in rules
that look for suspicious events identified in logs (for example, many failed logins that may indicate a
brute force password attack) and allow security personnel to write their own custom rules to alert on.
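The sketch below shows the shape of such a rule outside any particular SIEM product: count failed logins per source address and alert past a threshold. The log lines, field format, and threshold are all hypothetical:

```python
# Minimal sketch of a brute-force detection rule: alert when one source
# address accumulates too many failed logins. Hypothetical log format.
from collections import Counter

THRESHOLD = 3    # failures from one source before alerting (illustrative)

log_lines = [
    "login failure user=alice src=203.0.113.7",
    "login failure user=alice src=203.0.113.7",
    "login failure user=bob src=203.0.113.7",
    "login success user=carol src=10.0.0.5",
]

failures = Counter()
for line in log_lines:
    if "login failure" in line:
        src = line.split("src=")[1].strip()   # pull out the source address
        failures[src] += 1

for src, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: possible brute force from {src} ({count} failures)")
```

A production rule would also bound the time window (for example, ten failures within five minutes) rather than counting over all time.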
System Hardening
Systems—in this case any endpoint such as servers, desktops, laptops, mobile devices,
network devices, and databases—are potential targets of attacks.

System hardening is the practice of making these devices harder to attack by reducing
the entry points an attacker can potentially use to compromise a system.

System hardening usually includes the following activities:

● Mitigating known vulnerabilities in operating systems and applications (via patch management).

● Applying configuration settings to the system that reduce its attack surface (via secure configuration baselines). Such settings include disabling unneeded ports, services, and features; restricting access; implementing security policies; and configuring logging and alerts. (A simple port-audit sketch follows this list.)

● Using good configuration management processes to ensure all systems in the enterprise are properly managed and changes to them are properly controlled.
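As mentioned in the list above, here is a simple sketch of one narrow hardening check: auditing which local TCP ports are accepting connections and flagging any that are not on an approved list. The allowlist is hypothetical, and a real hardening review covers services, accounts, and settings as well as ports:

```python
# Minimal sketch: flag listening TCP ports that are not in the approved
# baseline. Checks well-known ports on the local host only.
import socket

APPROVED_PORTS = {22, 443}      # hypothetical baseline

def is_listening(port: int) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)
        return s.connect_ex(("127.0.0.1", port)) == 0   # 0 means connected

for port in range(1, 1025):     # well-known port range
    if is_listening(port) and port not in APPROVED_PORTS:
        print(f"Port {port} is open but not in the approved baseline")
```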
What is the difference between updates and patches?
Patch Management
Patch management is the discipline of ensuring that all systems in the enterprise are kept
fully patched with the most recent security updates from the respective software product
sources.
To do this effectively, most organizations follow some kind of patch management process
or lifecycle. A typical patch management process looks like this:
● Asset management: You can’t patch what you don’t know you have.
● Vulnerability discovery and management: Organizations take steps to actively learn about the vulnerabilities in their environment (a minimal matching sketch follows this list).
● Patch acquisition and validation: Most modern software and operating system
vendors have services available whereby they automatically inform customers of
security patches.
● Deployment and reporting: After testing and approval, patches can be deployed to
live production systems.
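To illustrate the vulnerability-matching step referenced above, here is a small sketch that compares a hypothetical asset inventory against a hypothetical advisory feed; real tooling would use live data sources and proper version parsing:

```python
# Minimal sketch: find systems running a package version older than the
# fixed version named in a security advisory. All data is hypothetical.
installed = {
    "web-01": {"openssl": "3.0.7"},
    "web-02": {"openssl": "3.0.12"},
}
advisories = {"openssl": "3.0.12"}   # minimum fixed version per package

def needs_patch(current: str, fixed: str) -> bool:
    # naive numeric comparison; real tools use robust version parsing
    return tuple(map(int, current.split("."))) < tuple(map(int, fixed.split(".")))

for host, packages in installed.items():
    for pkg, version in packages.items():
        if pkg in advisories and needs_patch(version, advisories[pkg]):
            print(f"{host}: {pkg} {version} should be updated to {advisories[pkg]}")
```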
Configuration Baselines
Security misconfigurations are settings on an operating system or application that are improperly
configured or left non-secure. They can come about due to poorly managed system administration
processes or inexperienced system administration or operations personnel. Examples of
misconfigurations are:

● Network shares left open
● Leaving default accounts, permissions, or credentials enabled
● Enabling unnecessary features such as unneeded services, accounts, and ports
● Ignoring recommended security settings for applications, frameworks, libraries, databases, etc.
● Using outdated or vulnerable software components
● Using weak or incorrect permissions

The best way to avoid security misconfigurations is to write down the correct settings for every system in
its own document called a security baseline and establish rules that require everyone to use the baselines
whenever systems are configured.
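A written baseline like this lends itself to automated checking. Here is a minimal sketch that compares a system's current settings against the documented baseline and reports drift; both dictionaries are hypothetical stand-ins for real settings:

```python
# Minimal sketch: report settings that deviate from the security baseline.
baseline = {
    "password_min_length": 14,
    "guest_account_enabled": False,
    "remote_root_login": False,
}
current = {
    "password_min_length": 8,
    "guest_account_enabled": False,
    "remote_root_login": True,
}

for setting, expected in baseline.items():
    actual = current.get(setting)
    if actual != expected:
        print(f"Misconfiguration: {setting} is {actual}, baseline requires {expected}")
```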
Configuration Baselines
For guidance on creating security configuration baselines, many organizations turn to product vendors or
third-party organizations. Vendors such as operating system and network device producers provide
configuration guides, which are baselines for security hardening of their products.

Third-party organizations, including the U.S. Defense Information Systems Agency (DISA) and the Center
for Internet Security (CIS), publish recommended security baselines for many operating systems,
databases, devices, and applications. Many of these recommended baselines not only provide
documented recommendations but also include downloadable scripts for easy implementation of the
hardening settings.
Configuration Management
Configuration management (CM) is the process of establishing consistency in engineering, implementation, and operations. CM ensures that every system in the enterprise is configured the way it should be. CM is implemented by following a disciplined process that includes establishing baselines, controlling changes to them, and making sure that all systems on the network are configured using the approved baselines.

CM generally has three components:

● Baselining: A baseline is a reference point used for comparison. In engineering, a baseline is the set of data that reflects a design.
● Version control: Every time a baseline is changed, a new version is created. This way, all changes
are tracked, and if the organization needs to go back to a previous version of a configuration or
operating system, they can easily do so.
● Auditing: On a periodic basis, the network can be audited to make sure that deployed systems are configured in accordance with the approved baselines (a minimal hash-comparison sketch follows this list).
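As referenced in the auditing item, one simple way to detect drift is to compare the hash of a deployed configuration file against its approved, version-controlled baseline copy; the paths here are hypothetical:

```python
# Minimal sketch of a CM audit: a deployed config file should hash
# identically to the approved baseline version.
import hashlib

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

deployed = "/etc/ssh/sshd_config"           # hypothetical deployed file
approved = "/srv/baselines/v3/sshd_config"  # hypothetical baseline copy

if sha256_of(deployed) != sha256_of(approved):
    print(f"{deployed} has drifted from the approved baseline")
```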
Examples of configuration management tools include:
Puppet; Ansible; Salt; Terraform; AWS Config
Question Time?
