
MODULE — V

Information Assurance Detection and Recovery Processes, Intrusion Detection and
Prevention System (IDPS), IDPS types, IDPS detection methods, IDPS – Analysis, Log
Management Tools: SIEM, Honeypot/Honeynet, Scanning and Analysis tools,
Malware Detection, Penetration Test, Physical Controls, Special considerations for
Physical security, Information Assurance Measurement Process, Metrics Program,
Incident Handling Process, Continuity Strategies, Computer Forensics, Examiner
Prerequisites, Team Establishment.

INFORMATION ASSURANCE DETECTION AND RECOVERY PROCESSES


What is information assurance?
“Assurance” in security engineering is defined as the degree of confidence that the
security needs of a system are satisfied.
Information assurance (IA) is the practice of assuring information and managing risks
related to the use, processing, storage and transmission of information. Information
assurance includes protection of the integrity, availability, authenticity, non-
repudiation and confidentiality of user data.
Undetected loopholes in the network can lead to unauthorized access, editing, copying
or deleting of valuable information. This is where information assurance plays a key
role.

INTRUSION DETECTION AND PREVENTION SYSTEM (IDPS)

An intrusion detection and prevention system (IDPS) monitors a network for possible
threats to alert the administrator, thereby preventing potential attacks.
How IDPS Functions
Today’s businesses rely on technology for everything, from hosting applications on
servers to communication. As technology evolves, the attack surface that
cybercriminals have access to also widens. A 2021 Check Point research report found
that there were 50% more attacks per week on corporate networks in 2021 compared
to 2020. As such, organizations of all industry verticals and sizes are ramping up their
security posture, aiming to protect every layer of their digital infrastructure from cyber
attacks.
A firewall is a go-to solution to prevent unwanted and suspicious traffic from
flowing into a system. It is tempting to think that firewalls are 100% foolproof and no
malicious traffic can seep into the network. Cybercriminals, however, are constantly
evolving their techniques to bypass all security measures. This is where an intrusion
detection and prevention system comes to the rescue. While a firewall regulates what
gets in, the IDPS regulates what flows through the system. It often sits right behind
firewalls, working in tandem.
An intrusion detection and prevention system is like the baggage and security
check at airports. A ticket or a boarding pass is required to enter an airport, and once
inside, passengers are not allowed to board their flights until the necessary security
checks have been made. Similarly, an intrusion detection system (IDS) only monitors
and raises alerts on malicious traffic or policy violations. It is the predecessor of the
intrusion prevention system (IPS), also known as an intrusion detection and prevention
system.
Besides monitoring and alerting, the IPS also works to prevent possible incidents with
automated courses of action.
Basic functions of an IDPS
An intrusion detection and prevention system offers the following features:

 Guards technology infrastructure and sensitive data: No system can exist in
a silo, particularly in the current era of data-driven businesses. Data is
constantly flowing through the network, so the easiest way to attack or gain
access to a system is to hide within the actual data. The IDS part of the system
is reactive, alerting security experts of such possible incidents. The IPS part of
the system is proactive, allowing security teams to mitigate these attacks that
may cause financial and reputational damage.
 Reviews existing user and security policies: Every security-driven
organization has its own set of user policies and access-related policies for its
applications and systems. These policies considerably reduce the attack surface
by providing access to critical resources to only a few trusted user groups and
systems. Continuous monitoring by intrusion detection and prevention systems
ensures that administrators spot any holes in these policy frameworks right
away. It also allows admins to tweak policies to test for maximum security and
efficiency.
 Gathers information about network resources: An IDS-IPS also gives the
security team a bird’s-eye view of the traffic flowing through its networks. This
helps them keep track of network resources, allowing them to modify a system
in case of traffic overload or under-usage of servers.
 Helps meet compliance regulations: All businesses, no matter the industry
vertical, are being increasingly regulated to ensure consumer data privacy and
security. Predominantly, the first step toward fulfilling these mandates is to
deploy an intrusion detection and prevention system.
An IDPS works by scanning processes for harmful patterns, comparing system files,
and monitoring user behavior and system patterns. IPS uses web application
firewalls and traffic filtering solutions to achieve incident prevention.
IDPS TYPES

Organizations can consider implementing four types of intrusion detection and
prevention systems based on the kind of deployment they’re looking for.
 Network-based intrusion prevention system (NIPS): Network-based
intrusion prevention systems monitor entire networks or network segments for
malicious traffic. This is usually done by analyzing protocol activity. If the
protocol activity matches against a database of known attacks, the
corresponding information isn’t allowed to get through. NIPS are usually
deployed at network boundaries, behind firewalls, routers, and remote access
servers.
 Wireless intrusion prevention system (WIPS): Wireless intrusion prevention
systems monitor wireless networks by analyzing wireless networking specific
protocols. While WIPS are valuable within the range of an organization’s
wireless network, these systems don’t analyze higher network protocols such as
transmission control protocol (TCP). Wireless intrusion prevention systems are
deployed within the wireless network and in areas that are susceptible to
unauthorized wireless networking.
 Network behavior analysis (NBA) system: While NIPS analyze deviations in
protocol activity, network behavior analysis systems identify threats by
checking for unusual traffic patterns. Such patterns are generally a result of
policy violations, malware-generated attacks, or distributed denial of service
(DDoS) attacks. NBA systems are deployed in an organization’s internal
networks and at points where traffic flows between internal and external
networks.
 Host-based intrusion prevention system (HIPS): Host-based intrusion
prevention systems differ from the rest in that they’re deployed in a single host.
These hosts are critical servers with important data or publicly accessible
servers that can become gateways to internal systems. The HIPS monitors the
traffic flowing in and out of that particular host by monitoring running
processes, network activity, system logs, application activity, and configuration
changes.
IDPS DETECTION METHODS

Depending on the type of intrusion detection system you choose, your security solution
will rely on a few different detection methods to keep you safe. Here’s a brief rundown
of each one.
1. Signature-Based Intrusion Detection
Signature-Based Intrusion Detection Systems (SIDS) aim to identify patterns and
match them with known signs of intrusions.
A SIDS relies on a database of previous intrusions. If activity within your network
matches the “signature” of an attack or breach from the database, the detection system
notifies your administrator.
Since the database is the backbone of a SIDS solution, frequent database updates are
essential, as SIDS can only identify attacks it recognizes. As a result, if your
organization becomes the target of a never-before-seen intrusion technique, no amount
of database updates will protect you.
2. Anomaly-Based Intrusion Detection
On the other hand, an Anomaly-Based Intrusion Detection System (AIDS) can identify
these new zero-day intrusions.
An AIDS uses machine learning (ML) and statistical data to create a model of
“normal” behavior. Anytime traffic deviates from this typical behavior, the system
flags it as suspicious.
The primary issue with AIDS vs. SIDS is the potential for false positives. After all, not
all changes are the result of malicious activity; some are simply indications of changes
in organizational behavior. But because an AIDS has no database of known attacks to
reference, it may report any and all anomalies as intrusions.
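To illustrate the statistical idea behind anomaly-based detection, here is a minimal
Python sketch that flags a metric value deviating too far from a learned baseline. It
assumes “normal” behavior can be summarized by the mean and standard deviation of
a single metric (say, requests per minute); real AIDS products build far richer models.

import statistics

def build_baseline(samples):
    # "Normal" behavior modeled as the mean and standard deviation of a
    # metric observed during a training window.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag values more than `threshold` standard deviations from baseline.
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

baseline = build_baseline([120, 130, 125, 118, 122, 127])
print(is_anomalous(500, *baseline))  # True: flagged as suspicious
print(is_anomalous(124, *baseline))  # False: within normal behavior

Note that a legitimate but unusual spike would also be flagged, which is exactly the
false-positive problem described above.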
3. Hybrid Intrusion Detection
A hybrid system combines the best of both worlds. By looking at patterns and one-off
events, a Hybrid Intrusion Detection system can flag new and existing intrusion
strategies.
The only downside to a hybrid system is the even bigger uptick in flagged issues.
However, considering that the purpose of an IDS is to flag potential intrusions, it’s
hard to see this increase in flags as a negative.

IDPS – ANALYSIS

Stateful protocol analysis


Some vendors use the term “deep packet inspection” to refer to performing some type
of stateful protocol analysis, often combined with a firewall capability that can block
communications determined to be malicious.
 This publication uses the term “stateful protocol analysis” because it is
appropriate for analyzing both network-based and host-based activity,
whereas “deep packet inspection” is an appropriate term for network-based
activity only.
 Also, historically there has not been consensus in the security community as to
the meaning of “deep packet inspection”.
Stateful protocol analysis is the process of comparing predetermined profiles of
generally accepted definitions of benign protocol activity for each protocol
state against observed events to identify deviations.
Unlike anomaly-based detection, which uses host or network-specific profiles,
stateful protocol analysis relies on vendor-developed universal profiles that specify
how particular protocols should and should not be used.
Pros
The “stateful” in stateful protocol analysis means that the IDPS is capable of
understanding and tracking the state of network, transport, and application
protocols that have a notion of state.
 For example, when a user starts a File Transfer Protocol (FTP) session, the
session is initially in the unauthenticated state. Unauthenticated users should
only perform a few commands in this state, such as viewing help information or
providing usernames and passwords.
 An important part of understanding state is pairing requests with responses, so
when an FTP authentication attempt occurs, the IDPS can determine if it was
successful by finding the status code in the corresponding response. Once the
user has authenticated successfully, the session is in the authenticated state, and
users are expected to perform any of several dozen commands. Performing
most of these commands while in the unauthenticated state would be considered
suspicious, but in the authenticated state performing most of them is considered
benign.
Stateful protocol analysis can identify unexpected sequences of commands, such as
issuing the same command repeatedly or issuing a command without first issuing a
command upon which it is dependent.
Another state tracking feature of stateful protocol analysis is that for protocols that
perform authentication, the IDPS can keep track of the authenticator used for each
session, and record the authenticator used for suspicious activity. This is helpful when
investigating an incident. Some IDPSs can also use the authenticator information to
define acceptable activity differently for multiple classes of users or specific users.
The “protocol analysis” performed by stateful protocol analysis methods usually
includes reasonableness checks for individual commands, such as minimum and
maximum lengths for arguments. If a command typically has a username argument,
and usernames have a maximum length of 20 characters, then an argument with a
length of 1000 characters is suspicious. If the large argument contains binary data, then
it is even more suspicious.
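As a rough illustration of such reasonableness checks, the Python sketch below flags
an oversized or binary-looking argument for a command whose username argument is
expected to be at most 20 characters. The length limit and printable-byte test are
assumptions for the example, not the behavior of any particular IDPS product.

MAX_USERNAME_LEN = 20  # assumed protocol limit, as in the example above

def check_username_argument(argument: bytes):
    # Reasonableness checks: flag oversized arguments and binary data
    # where printable text is expected.
    findings = []
    if len(argument) > MAX_USERNAME_LEN:
        findings.append("argument exceeds expected maximum length")
    if any(b < 0x20 or b > 0x7e for b in argument):
        findings.append("argument contains non-printable (binary) data")
    return findings

print(check_username_argument(b"alice"))      # [] -- looks benign
print(check_username_argument(b"A" * 1000))   # oversized: suspicious
print(check_username_argument(b"\x90" * 40))  # binary data: very suspicious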
Cons
Stateful protocol analysis methods use protocol models, which are typically based
primarily on protocol standards from software vendors and standards bodies (e.g.,
Internet Engineering Task Force [IETF] Request for Comments [RFC]). The protocol
models also typically take into account variances in each protocol’s implementation.
 Many standards are not exhaustively complete in explaining the details of the
protocol, which causes variations among implementations.
 Also, many vendors either violate standards or add proprietary features,
some of which may replace features from the standards.
 For proprietary protocols, complete details about the protocols are often not
available, making it difficult for IDPS technologies to perform comprehensive,
accurate analysis.
 As protocols are revised and vendors alter their protocol implementations, IDPS
protocol models need to be updated to reflect those changes.
 The primary drawback to stateful protocol analysis methods is that they are
very resource-intensive because of the complexity of the analysis and the
overhead involved in performing state tracking for many simultaneous sessions.

LOG MANAGEMENT TOOLS: SIEM

Security Information and Event Management (SIEM) and log management are
two examples of software tools that allow IT organizations to monitor their security
posture using log files, detect and respond to Indicators of Compromise (IoC) and
conduct forensic data analysis and investigations into network events and possible
attacks.
Key takeaways
 An increasing number of IT organizations are relying on their log files as a
means of monitoring activity on the IT infrastructure and maintaining
awareness of possible security threats
 If your sole requirement is to aggregate log files from a variety of sources into
one place, a log management system might be the simplest and most effective
solution for you.
 If your job is to maintain security of a complex and disparate IT infrastructure
using the most cutting-edge security monitoring tools available, you should be
looking at SIEM software.
 Log management systems are very similar to SEM tools, except that while SEM
tools are purpose-built for cyber security applications, LMS tools are more
geared towards the needs of someone in a systems analyst role who might be
reviewing log files for a purpose besides maintaining security.
SIEM and log management definitions
The key difference between SIEM vs log management systems is in their
treatment and functions with respect to event logs or log files.
A log file is a file that contains records of events that occurred in an operating
system, application, server, or from a variety of other sources. Log files are a valuable
tool for security analysts, as they create a documented trail of all communications to
and from each source. When a cyber-attack occurs, log files can be used to investigate
and analyze where the attack came from and what effects it had on the IT
infrastructure.
Log parsing is a powerful tool used by SIEM to extract data elements from raw
log data. Log parsing in SIEM allows you to correlate data across systems and conduct
analysis to understand each and every incident. Log sources for SIEM include log and
event files from events that occur in an operating system, application, server, or other
sources.
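As a simple illustration of log parsing, the Python sketch below pulls structured fields
out of a raw sshd-style log line with a regular expression. The log line and field names
are invented for the example; SIEM products ship with parsers for hundreds of such
formats.

import re

# Hypothetical sshd log line; real parsers handle many formats.
line = ("Jan 12 03:14:07 web01 sshd[4211]: Failed password for root "
        "from 203.0.113.9 port 52144 ssh2")

PATTERN = re.compile(
    r"(?P<host>\S+) sshd\[\d+\]: Failed password for (?P<user>\S+) "
    r"from (?P<src_ip>\S+) port (?P<port>\d+)"
)

match = PATTERN.search(line)
if match:
    # Extracted fields can now be correlated with events from other systems.
    print(match.groupdict())
    # {'host': 'web01', 'user': 'root', 'src_ip': '203.0.113.9', 'port': '52144'}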
A Log Management System (LMS) is a software system that aggregates and
stores log files from multiple network endpoints and systems in a single location. LMS
applications allow IT organizations to centralize all of their log data from disparate
systems into a single place where they can be viewed and correlated by an IT security
analyst.
A SIEM software system incorporates the features of three types of security tools
into a single application.
1. Security Event Management (SEM) tools are very similar to LMS. They
include functionality for aggregating log files from multiple systems and hosts,
but they are geared toward the needs of IT security analysts instead of system
administrators.
2. Security Information Management (SIM) software tools are used to collect,
monitor and analyze data from computer event logs. They typically include
automated features and alerts triggered by predetermined conditions that might
indicate that the network is compromised. SIM tools help security analysts
automate the incident response process, reduce false positives and generate
accurate reports on the organization's security posture.
3. Security Event Correlation (SEC) software is used to sift through massive
quantities of event logs and discover correlations and connections between
events that could indicate a security issue.
SIEM tools combine all of these functionalities into one application that acts as a
layer of management above existing security controls. SIEM tools collect and
aggregate log data from across the IT infrastructure into a centralized platform where it
can be reviewed by security analysts. They also deliver SIM features, such as
automation and alerts, and the correlative capabilities of SEC tools.
Today's SIEM tools are leveraging modern technologies such as machine learning and
big data analysis to further streamline the process of investigating, detecting and
responding to security threats.
SIEM vs log management: capabilities and features
SIEM monitoring differs from log management in the treatment of log files and
focuses on monitoring event logs. With a focus on monitoring and analysis, SIEM
monitoring leverages features such as automated alerts, reporting, and improving
your incident response processes.
Log management systems are very similar to SIEM tools, except that while
SIEM tools were purpose-built for cyber security applications, LMS tools are more
geared towards the needs of someone in a systems analyst role who might be
reviewing log files for a purpose besides maintaining security.
If your sole requirement is to aggregate log files from a variety of sources into one
place, a log management system might be the simplest and most effective solution for
you. If your job is to maintain the security of a complex and disparate IT infrastructure
using the most cutting-edge security monitoring tools available, you should be looking
at SIEM software.
We can describe the difference between SIEM vs log management tools in terms of the
core features offered by each application. Log management tools are characterized by:
1. Log data collection - LMS aggregates event logs from all operating systems
and applications within a given network.
2. Efficient retention of data - Large networks produce massive volumes of data.
LMS tools incorporate features that support efficient retention of high data
volumes for required lengths of time.
3. Log indexing and search function - Large networks produce millions of event
logs. LMS systems have tools like filtering, sorting, and searching that help
analysts find the information they need.
4. Reporting - The most sophisticated LMS tools can use data from event logs to
automate reports on the IT organization's operational, compliance or security
status or performance.
SIEM tools typically have all of the same features as LMS tools, along with:
1. Threat detection alerts - SIEM tools can identify suspicious event log activity,
such as repeated failed login attempts, excessive CPU usage, and large data
transfers, and immediately alert IT security analysts when a possible IoC is
detected (a correlation sketch follows this list).
2. Event correlation - SIEM tools can use machine learning or rules-based
algorithms to draw connections between events in different systems.
3. Dashboarding - SIEM tools include dashboarding features that enable real-
time monitoring. Dashboards can often be customized to feature the most
important or relevant data, increasing the overall visibility of the network and
enabling live monitoring by a human operator.
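As a minimal sketch of the correlation and alerting features above, the Python code
below counts failed-login events per source IP inside a sliding time window and raises
an alert once a threshold is crossed. The window, threshold, and event data are
illustrative assumptions; a real SIEM applies rules like this across millions of events.

from collections import defaultdict
from datetime import datetime, timedelta

# Each event: (timestamp, source_ip); in practice these come from parsed logs.
events = [
    (datetime(2024, 1, 12, 3, 14, 7), "203.0.113.9"),
    (datetime(2024, 1, 12, 3, 14, 9), "203.0.113.9"),
    (datetime(2024, 1, 12, 3, 14, 11), "203.0.113.9"),
    (datetime(2024, 1, 12, 3, 20, 0), "198.51.100.4"),
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 3  # alert on 3+ failures from one IP within the window

def correlate(events):
    by_ip = defaultdict(list)
    for ts, ip in sorted(events):
        by_ip[ip].append(ts)
    for ip, times in by_ip.items():
        for start in times:
            # Count failures from this IP inside the sliding window.
            in_window = [t for t in times if start <= t < start + WINDOW]
            if len(in_window) >= THRESHOLD:
                print(f"ALERT: {len(in_window)} failed logins from {ip}")
                break

correlate(events)  # ALERT: 3 failed logins from 203.0.113.9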
HONEYPOT/HONEYNET

What is a Honeypot
A honeypot is a security mechanism that creates a virtual trap to lure attackers. An
intentionally compromised computer system allows attackers to
exploit vulnerabilities so you can study them to improve your security policies. You
can apply a honeypot to any computing resource from software and networks to file
servers and routers.
Honeypots are a type of deception technology that allows you to understand attacker
behavior patterns. Security teams can use honeypots to investigate cybersecurity
breaches to collect intel on how cybercriminals operate. They also reduce the risk of
false positives, when compared to traditional cybersecurity measures, because they are
unlikely to attract legitimate activity.
Honeypots vary based on design and deployment models, but they are all decoys
intended to look like legitimate, vulnerable systems to attract cybercriminals.
Production vs. Research Honeypots
There are two primary types of honeypot designs:
 Production honeypots—serve as decoy systems inside fully operating
networks and servers, often as part of an intrusion detection system (IDS). They
deflect criminal attention from the real system while analyzing malicious
activity to help mitigate vulnerabilities.
 Research honeypots—used for educational purposes and security
enhancement. They contain trackable data that you can trace when stolen to
analyze the attack.
Types of Honeypot Deployments
There are three types of honeypot deployments that permit threat actors to perform
different levels of malicious activity:
 Pure honeypots—complete production systems that monitor attacks through
bug taps on the link that connects the honeypot to the network. They are
unsophisticated.
 Low-interaction honeypots—imitate services and systems that frequently
attract criminal attention. They offer a method for collecting data from blind
attacks such as botnets and worms (a minimal sketch of this idea follows this
list).
 High-interaction honeypots—complex setups that behave like real production
infrastructure. They don’t restrict the level of activity of a cybercriminal,
providing extensive cybersecurity insights. However, they are higher-
maintenance and require expertise and the use of additional technologies like
virtual machines to ensure attackers cannot access the real system.
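A low-interaction honeypot can be surprisingly small. The Python sketch below
listens on an otherwise-unused port, logs every connection attempt, and presents a fake
SSH banner. The port number and banner are assumptions for the example; a
production honeypot adds protocol emulation, isolation, and alerting.

import socket
from datetime import datetime, timezone

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # assumed decoy port
    srv.listen()
    while True:
        conn, (ip, port) = srv.accept()
        with conn:
            # No legitimate traffic should arrive here, so every hit is
            # worth recording for later analysis.
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} probe from {ip}:{port}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # decoy banner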
Honeypot Limitations
Honeypot security has its limitations as the honeypot cannot detect security breaches in
legitimate systems, and it does not always identify the attacker. There is also a risk
that, having successfully exploited the honeypot, an attacker can move laterally to
infiltrate the real production network. To prevent this, you need to ensure that the
honeypot is adequately isolated.
To help scale your security operations, you can combine honeypots with other
techniques. For example, the canary trap strategy helps find information leaks by
selectively sharing different versions of sensitive information with suspected moles or
whistleblowers.
Honeynet: A Network of Honeypots
A honeynet is a decoy network that contains one or more honeypots. It looks like a real
network and contains multiple systems but is hosted on one or only a few servers, each
representing one environment: for example, a Windows honeypot machine, a Mac
honeypot machine, and a Linux honeypot machine.
A “honeywall” monitors the traffic going in and out of the network and directs it to the
honeypot instances. You can inject vulnerabilities into a honeynet to make it easy for
an attacker to access the trap.

Example of a honeynet topology


Any system on the honeynet may serve as a point of entry for attackers. The honeynet
gathers intelligence on the attackers and diverts them from the real network. The
advantage of a honeynet over a simple honeypot is that it feels more like a real
network, and has a larger catchment area.
This makes a honeynet a better solution for large, complex networks – it presents
attackers with an alternative corporate network which can represent an attractive
alternative to the real one.
Spam Trap: An Email Honeypot
Spam traps are fraud management tools that help Internet Service Providers (ISPs)
identify and block spammers. They help make your inbox safer by blocking
vulnerabilities. A spam trap is a fake email address used to bait spammers. Legitimate
mail is unlikely to be sent to a fake address, so when an email is received, it is most
likely spam.
Types of spam traps include:
 Username typos—the spam filter detects typos resulting from human or
machine error and sends the email into the spam folder. This includes
misspelled email addresses like, for example, [email protected] instead
of the real [email protected].
 Expired email accounts—some providers use abandoned email accounts or
expired domain names as spam traps.
 Purchased email lists—these often contain many invalid email addresses that
can trigger a spam trap. Additionally, since the sender didn’t gain authorization
to send emails to the accounts on the list, they can be treated as spammers and
blacklisted.

SCANNING AND ANALYSIS TOOLS

After making a list of attackable IPs from the Reconnaissance phase, we need to work
on phase 2 of ethical hacking, i.e., Scanning. The process of scanning is divided into
three parts:
1. Determine if the system is on and working.
2. Find the ports on which applications are running.
3. Scan the target system for vulnerabilities.
Ping and Ping Sweeps:
The simplest way to check if a system is alive is to ping that system’s IP address. A
ping is a special type of packet called an ICMP packet. On pinging a device’s IP, an
ICMP echo request message is sent to the target, and the target system sends an echo
reply packet in response.
The echo reply message carries other valuable information besides telling whether the
system is alive. It also gives the round-trip time of packets, i.e., the time taken by the
ping message to travel to the target system and back. It also provides information about
packet loss, which can be helpful in determining the reliability of the network.
A ping sweep is a method of pinging a list of IP addresses automatically, since pinging
a large list of IPs by hand is time-consuming and error-prone. A common tool for ping
sweeps is fping, which can be invoked with the following command:
fping -a -g 172.16.10.1 172.16.10.20
 The “-a” switch shows only alive IPs in the output.
 The “-g” switch specifies a range of IPs.
 In the above command, the range is 172.16.10.1 to 172.16.10.20.
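For comparison, a ping sweep can also be scripted directly, as in the rough Python
sketch below, which calls the system ping binary for every address in a range and
collects the ones that reply. The flags assume a Linux ping (-c for count, -W for
timeout); other platforms use different switches.

import ipaddress
import subprocess

def ping_sweep(network: str):
    # Same idea as the fping command above: ping each address in the
    # range and report the ones that answer.
    alive = []
    for host in ipaddress.ip_network(network).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(host)],
            capture_output=True,
        )
        if result.returncode == 0:  # echo reply received
            alive.append(str(host))
    return alive

print(ping_sweep("172.16.10.0/28"))  # covers hosts .1 through .14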
Port Scanning:
A computer has a total of 65,536 ports (0–65,535). Depending on the nature of the
communication and the application using a port, it can be either UDP or TCP. Scanning
a system to check which ports are open and which applications are using them gives us
a better picture of the target system.
Port scanning is commonly done with a tool called Nmap, written by Gordon “Fyodor”
Lyon. It is available in both GUI and command-line interfaces.
Command:
nmap -sT -p- 172.16.10.5
 “-s” specifies the scan type: “-sT” means a TCP connect scan and “-sU” a UDP
scan.
 “-p-” tells Nmap to scan all ports of the target IP; a specific port or range can
also be given with “-p”.
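At its core, the TCP connect scan performed by “-sT” simply attempts a full TCP
connection to each port; a successful connection means the port is open. The Python
sketch below shows this idea using only the standard socket module; the target address
and timeout are illustrative values.

import socket

def tcp_connect_scan(target: str, ports):
    # Attempt a full TCP connection to each port; connect_ex returns 0
    # when the connection succeeds, i.e. the port is open.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((target, port)) == 0:
                open_ports.append(port)
    return open_ports

print(tcp_connect_scan("172.16.10.5", range(1, 1025)))  # scan ports 1-1024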

MALWARE DETECTION

Cybercriminals use and develop malware (malicious software) to infiltrate
target computer systems and achieve their objectives. Malware is offensive in nature
and can cause destruction, disruption and numerous other effects to computer systems
to achieve criminal goals.
Conversely, malware detection is a set of defensive techniques and technologies
required to identify, block and prevent the harmful effects of malware. This protective
practice consists of a wide body of tactics, amplified by various tools based on the type
of malware that infected the device.
10 Malware Detection Techniques
An effective security practice uses a combination of expertise and technology to detect
and prevent malware. Tried and proven techniques include:
1. Signature-based detection
Signature-based detection uses known digital indicators of malware to identify
suspicious behavior. Lists of indicators of compromise (IOCs), often maintained in a
database, can be used to identify a breach. While IOCs can be effective in identifying
malicious activity, they are reactive in nature. As a result, CrowdStrike uses indicators
of attack (IOA) to proactively identify in-process cyberattacks.
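A very common signature is simply a cryptographic hash of a known-malicious file.
The Python sketch below hashes a file with SHA-256 and checks it against a small in-
memory IOC set. The sample hash is a placeholder; real deployments pull IOCs from a
threat intelligence feed or database.

import hashlib

# Placeholder IOC set: SHA-256 hashes of known-malicious files.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_known_malware(path: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    # A match identifies a known threat; novel malware will not match,
    # which is exactly the reactive limitation noted above.
    return digest.hexdigest() in KNOWN_BAD_HASHES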
2. Static file analysis
Examining a file’s code, without running it, to identify signs of malicious intent. File
names, hashes, strings such as IP addresses, and file header data can all be evaluated to
determine whether a file is malicious. While static file analysis is a good starting point,
proficient security teams use additional techniques to detect advanced malware that
can go unidentified during static analysis.
3. Dynamic malware analysis
Dynamic malware analysis executes suspected malicious code in a safe environment
called a sandbox. This closed system enables security professionals to watch and study
the malware in action without the risk of letting it infect their system or escape into the
enterprise network.
4. Dynamic monitoring of mass file operations
Observing mass file operations such as rename or delete commands to identify signs of
tampering or corruption. Dynamic monitoring often uses a file integrity monitoring
tool to track and analyze the integrity of file systems through both reactive forensic
auditing and proactive rules-based monitoring.
5. File extensions blocklist/blocklisting
File extensions are letters occurring after a period in a file name, indicating the format
of the file. This classification can be used by criminals to package malware for
delivery. As a result, a common security method is to list known malicious file
extension types in a “blocklist” to prevent unsuspecting users from downloading or
using the dangerous file.
6. Application allowlist/allowlisting
The opposite of a blocklist/blocklisting, where an organization authorizes a system to
use applications on an approved list. Allowlisting can be very effective in preventing
nefarious applications through rigid parameters. However, it can be difficult to manage
and reduce an organization’s operational speed and flexibility.
7. Malware honeypot/honeypot files
A malware honeypot mimics a software application or an application programming
interface (API) to draw out malware attacks in a controlled, non-threatening
environment. Similarly, a honeypot file is a decoy file to draw and detect attackers. In
doing so, security teams can analyze the attack techniques and develop or enhance
antimalware solutions to address these specific vulnerabilities, threats or actors.
8. Checksumming/cyclic redundancy check (CRC)
A calculation on a collection of data, such as a file, to confirm its integrity. One of the
most common checksums used is a CRC, which involves analysis of both value and
position of a group of data. Checksumming can be effective for identifying corruption
in data but is not foolproof for determining tampering.
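For illustration, Python’s standard library includes a CRC-32 implementation; the
sketch below checksums a file in chunks. As stated above, a matching CRC indicates
the data is likely uncorrupted but does not prove it was not deliberately tampered with.

import zlib

def file_crc32(path: str) -> int:
    # Fold the file through CRC-32 chunk by chunk.
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

print(hex(zlib.crc32(b"hello world")))  # 0xd4a1185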
9. File entropy/measuring changes of a file’s data
As threat intelligence and cybersecurity evolve, adversaries increasingly create
dynamic malware executables to avoid detection. This results in modified files that
have high entropy levels. As a result, a file’s data change measured through entropy
can identify potential malware.
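Shannon entropy over a file’s bytes is straightforward to compute, as the Python
sketch below shows. As a rule of thumb, plain text tends to score roughly 4–5 bits per
byte, while packed or encrypted executables approach the maximum of 8; the exact
thresholds are heuristics, not fixed standards.

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Entropy in bits per byte: low for repetitive data, close to 8.0
    # for encrypted or packed data.
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(shannon_entropy(b"AAAAAAAA"))        # -0.0 (no randomness at all)
print(shannon_entropy(bytes(range(256))))  # 8.0 (maximum entropy)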
10. Machine learning behavioral analysis
Machine learning (ML) is a subset of artificial intelligence (AI), and refers to the
process of teaching algorithms to learn patterns from existing data to predict answers
on new data. This technology can analyze file behavior, identify patterns and use these
insights to improve detection of novel and unidentified malware.

PENETRATION TEST

A penetration test (pen test) is a simulated cyber attack that attempts to find and
exploit vulnerabilities in a computer system. The purpose of this simulated attack is to
identify any weak spots in a system’s defenses which attackers could take advantage of.
This is like a bank hiring someone to dress as a burglar and try to break into their
building and gain access to the vault. If the ‘burglar’ succeeds and gets into the bank or
the vault, the bank will gain valuable information on how they need to tighten their
security measures.
Who performs pen tests?
It’s best to have a pen test performed by someone with little-to-no prior knowledge of
how the system is secured because they may be able to expose blind spots missed by
the developers who built the system. For this reason, outside contractors are usually
brought in to perform the tests. These contractors are often referred to as ‘ethical
hackers’ since they are being hired to hack into a system with permission and for the
purpose of increasing security.
Many ethical hackers are experienced developers with advanced degrees and a
certification for pen testing. On the other hand, some of the best ethical hackers are
self-taught. In fact, some are reformed criminal hackers who now use their expertise to
help fix security flaws rather than exploit them. The best candidate to carry out a pen
test can vary greatly depending on the target company and what type of pen test they
want to initiate.
What are the types of pen tests?
 Open-box pen test - In an open-box test, the hacker will be provided with
some information ahead of time regarding the target company’s security info.
 Closed-box pen test - Also known as a ‘single-blind’ test, this is one where the
hacker is given no background information besides the name of the target
company.
 Covert pen test - Also known as a ‘double-blind’ pen test, this is a situation
where almost no one in the company is aware that the pen test is happening,
including the IT and security professionals who will be responding to the
attack. For covert tests, it is especially important for the hacker to have the
scope and other details of the test in writing beforehand to avoid any problems
with law enforcement.
 External pen test - In an external test, the ethical hacker goes up against the
company’s external-facing technology, such as their website and external
network servers. In some cases, the hacker may not even be allowed to enter the
company’s building. This can mean conducting the attack from a remote
location or carrying out the test from a truck or van parked nearby.
 Internal pen test - In an internal test, the ethical hacker performs the test from
the company’s internal network. This kind of test is useful in determining how
much damage a disgruntled employee can cause from behind the company’s
firewall.
How is a typical pen test carried out?
Pen tests start with a phase of reconnaissance, during which an ethical hacker spends
time gathering data and information that they will use to plan their simulated attack.
After that, the focus becomes gaining and maintaining access to the target system,
which requires a broad set of tools.
Tools for attack include software designed to produce brute-force attacks or SQL
injections. There is also hardware specifically designed for pen testing, such as small
inconspicuous boxes that can be plugged into a computer on the network to provide the
hacker with remote access to that network. In addition, an ethical hacker may
use social engineering techniques to find vulnerabilities. For example, sending
phishing emails to company employees, or even disguising themselves as delivery
people to gain physical access to the building.
The hacker wraps up the test by covering their tracks; this means removing any
embedded hardware and doing everything else they can to avoid detection and leave
the target system exactly how they found it.

PHYSICAL CONTROLS
Physical Security Controls
Physical controls are the implementation of security measures in a defined structure
used to deter or prevent unauthorized access to sensitive material.

Examples of physical controls are:

 Closed-circuit surveillance cameras


 Motion or thermal alarm systems
 Security guards
 Picture IDs
 Locked and dead-bolted steel doors
 Biometrics (includes fingerprint, voice, face, iris, handwriting, and other
automated methods used to recognize individuals)

Preventative Controls

Examples of preventative controls include:

 Hardening
 Security Awareness Training
 Security Guards
 Change Management
 Account Disablement Policy

Hardening

The process of reducing security exposure and tightening security controls.

Security Awareness Training

The process of providing formal cybersecurity education to your workforce about a
variety of information security threats and your company’s policies and procedures for
addressing them.

Security Guards

A person employed by a public or private party to protect an organization’s assets.
Security guards are frequently positioned as the first line of defense for businesses
against external threats, intrusion and vulnerabilities to the property and its dwellers.

Change Management

The methods and manners in which a company describes and implements change
within both its internal and external processes. This includes preparing and supporting
employees, establishing the necessary steps for change, and monitoring pre- and post-
change activities to ensure successful implementation.

Account Disablement Policy

A policy that defines what to do with user access accounts for employees who leave
voluntarily, are terminated immediately, or go on a leave of absence.

Detective Controls

Examples of detective controls include:

 Log Monitoring
 SIEM
 Trend Analysis
 Security Audits
 Video Surveillance
 Motion Detection

Log Monitoring

Log monitoring is a diagnostic method used to analyze real-time events or stored data
to ensure application availability and to assess the impact of state changes on an
application’s performance.

SPECIAL CONSIDERATIONS FOR PHYSICAL SECURITY

1. Identify your physical weak points and determine your need


The first thing you need to do is figure out where your vulnerabilities are. For
example, it is never a good idea to build a data centre against outside walls; similarly,
pay attention to what is housed above and below your data storage facility. By securing
these weak points, you can eliminate the most obvious threat – someone breaking in.
Small data centres especially may be located in a multi-floor building, in which case
consider installing physical barriers, cameras and access control systems. Additionally,
it is important to examine your operational processes so that visitors and contractors
are not let inside your server room accidentally.
2. Keep track of all your workflow processes
It is critical that you keep track of your operations and compliance-related activities.
You want to limit access to your data storage centre to IT staff and organizational
stakeholders. As such, you should regularly monitor your access logs and perform
audit checks. Keep track of peripherals, servers and data centre management software,
looking for any suspicious activity. If your data centre is in a colocation facility, and
you have a trusted provider, most likely your assets are safe and well-maintained.
However, a prudent strategy should involve regular audits, regardless of where the
centre is housed. Remember holding and managing data may well be the very core
purpose of your organisation.

3. Watch out for human error
The most common form of data breach is that committed by insiders. It is now
recognised that danger comes in the form of poor engineering, carelessness, or
corporate espionage, but in all cases, people working in your facility pose the biggest
risk. Accordingly, it is necessary that you implement strong security policies that hold
personnel accountable for their access permissions.
It is advisable that you pair access cards with biometric security, such as fingerprint
scans, for the best possible defence. Biometric security is safer than passwords and
much harder to replicate or steal.
Employees will be deterred from lending each other access cards, and if one is stolen,
it will be useless to the individual who tries to access your server room. It is important
to understand that access should never be shared in an organisation.
If the fingerprint data perfectly matches the stored fingerprint template, the reader unit
sends an encrypted “open door” command to the control unit. The unit then opens the
electric lock and logs the date/time of the user entry.
Due to the precision of proprietary fingerprint recognition technology, fake fingers, the
wrong finger, or a finger of someone deceased cannot fool the system and open doors
to access secured areas. Further, the authorized person may place on the reader a
“duress finger” programmed into the system to send an alert to security personnel.
4. Educate your people on security policies
A big part of having a strong security system is staff member training, e.g., explaining
to staff why they should not lend each other access cards and instructing them to report
any suspicious activity. Additionally, let them understand that for compliance
purposes, workflow processes are strictly segregated and monitored. Often, regulatory
agencies will want to see who accessed which piece of information and when.
Eliminating duplication of access means that you are able to adhere to compliance
standards with greater ease.
5. Ask your business stakeholders for their feedback
Once you have a security system fully in place, the next thing for you to do is discuss
your policies with staff members. Ask them if they agree your assets are secure. Are
they accessing data with ease? What are some potential vulnerabilities? It is also a
good idea to talk to your IT staff and get their opinion on the matter.

INFORMATION ASSURANCE MEASUREMENT PROCESS

Information assurance is formally defined as the set of “measures that protect and
defend information and information systems by ensuring their availability, integrity,
authentication, confidentiality, and non-repudiation. These measures include providing
for restoration of information systems by incorporating protection, detection, and
reaction capabilities.” The measurement process quantifies how well these measures
are implemented and maintained.

METRICS PROGRAM

Software Metrics Definition


A software metric is a measurement of quantifiable or countable software
characteristics. Software metrics are essential for various purposes, including
measuring software performance, planning work items, and measuring productivity.
 Industry standards such as ISO 9000 and industry models such as the Software
Engineering Institute’s (SEI) Capability Maturity Model Integrated (CMMI®)
help in utilizing metrics to better comprehend, monitor, manage, and predict
software projects, processes, and products.
 Software metrics can provide engineers and management with the information
required to make technical decisions.
 Everyone involved in selecting, designing, implementing, collecting, and
utilizing a metric must comprehend its definition and purpose if it is to provide
helpful information.
 Software metrics programs must be designed to provide the precise data
required to manage software projects and enhance software engineering
processes and services. Organisational, project, and task objectives are
determined beforehand, and metrics are chosen based on these objectives.
We use software metrics to determine our efficacy in meeting these objectives:
 Teams can use software metrics to measure performance, plan upcoming work
tasks, track productivity, and better control the production process during
project management by observing various figures and trends during production.
 In conjunction with management functions, teams can also use software metrics
to simplify projects by devising more efficient procedures, creating software
maintenance plans, and keeping production teams informed of issues that need
to be resolved.
 Throughout the software development process, various indicators are
intertwined. Software metrics correspond to the four management functions:
planning, organization, control, and enhancement.
The Need for Software Metrics
We reside in a world where quality is essential.
Software metrics provide stakeholders with a quantitative premise for software
development planning and forecasting. In other words, the software quality can be
readily monitored and enhanced. There is widespread agreement that a focus on quality
increases productivity and promotes a culture of continuous improvement.
By measuring problems and defects, information can be obtained that can be used to
regulate software products. These can be broken down into five metrics: Reliability,
Usability, Security, Cost and Schedule, and Efficiency.
Role of Metrics and Measurement in Software Engineering
A measurement represents the size, quantity, volume, or dimension of a specific
attribute of a product or process. Software measurement is a quantified attribute of a
property of a software product or the software development process, and it is
fundamental to software engineering. The ISO standard defines and regulates the
software measurement procedure.
Software Measurement Principles
The software measurement procedure is comprised of the following five activities:
 Formulation: The derivation of software measures and metrics for representing
the under-consideration software.
 Collection: The mechanism used to collect the data necessary to derive the
calculated metrics.
 Analysis: the computation and application of metrics and mathematical
instruments.
 Interpretation: The evaluation of metrics that provide insight into the
representation’s quality.
 Feedback: Recommendation derived from the analysis of product metrics and
transmitted to the software development team.
Software quality metrics are performance indicators that assess a software
product’s quality. Agile metrics, such as velocity and QA metrics such as test
coverage, are typical examples.
Metrics do not enhance development, but managers utilize them to understand the
production process better. Metrics present all process elements as data and describe
what occurs throughout the endeavor. Without this information, it is difficult for
managers to identify problems and make improvements.
However, with the valuable information provided by software quality metrics,
teams can:
 Anticipate defects
 Identify and correct problems
 Effectively plan development
 Increase efficiency
How to Define Clear Software Metrics?
The best way to demystify software metrics is to understand fully why they are being
monitored in the first place. For example, if different developers are using different
programming languages to write code, then a simplistic measure like LOC (lines of
code) should not be used, as it will give a confused picture of the amount of work
being done. On the other hand, simply tracking the number of errors may lead to
developers avoiding complex constructs altogether and hampering the product’s
overall usability.
In these cases, it is better to focus on the exact aspects of productivity that need to be
measured. For example, MTTD (mean time to detect) can be a better metric than
simply counting the number of defects, as it gives a better understanding of how agile
the team is in unearthing defects in the first place.
Types of Software Testing Metrics
Broadly, Software Metrics can be classified into the following types:

1. Product Metrics– Product Metrics quantify the features of a software product.
The primary features emphasized are, first, the size and complexity of the
product and, second, the dependability and quality of the software.
2. Process Metrics– Unlike Product metrics, process metrics assess the
characteristics of software development processes. Multiple factors can be
emphasized, such as identifying defects or errors efficiently. In addition to fault
detection, it emphasizes techniques, methods, tools, and overall software
process reliability.
3. Internal Metrics– Using Internal Metrics, all properties crucial to the software
developer are measured. Lines of Code (LOC) is an example of an internal
metric.
4. External Metrics– Utilising External Metrics, all user-important properties are
measured.
5. Project Metrics– Project managers use these metrics to monitor the project’s
progress, generating data from references to past projects. Time, cost, labor,
etc., are among the most important measurement factors.
How to Track & Measure Software Metrics?
An important point to remember is that the metrics list should be defined on a case-by-
case basis.
 Mindlessly repeating the metrics offered by another project or tracking
whatever is provided by a project management tool or a software development
framework/model is an egregious waste of time and effort.
 It is not helpful to track metrics if they won’t help answer inquiries from project
stakeholders or won’t change how teams are working. For instance,
performance metrics are highly valued in a real-time processing system, but
availability is prioritized in a distributed asynchronous system.
 Management teams gain from tracking software metrics by keeping tabs on
software development, establishing objectives, and quickly analyzing results.
 Simplifying software development too much, on the other hand, could cause
engineers to lose sight of the big picture and stop caring about things like
making products that people want to use.
Of course, none of this would be helpful without collecting and analyzing software
metrics. The first problem is that software development teams prioritize action above
tracking metrics.
Optimum Software metrics need to have several key characteristics, such as –
 Simple and programmable.
 Unambiguous and reliable
 Always utilize the same units of measurement.
 Flexible and simple to fine-tune regardless of the programming language used
 Easy to understand
 It is possible to validate for accuracy and dependability.
 Essential in making high-level decisions.
Effective software metrics assist software engineers in identifying flaws in the
software development life cycle so that the software can be developed according to
user requirements, within the anticipated time frame and budget, and so on.
The stages enumerated below are used to develop effective software metrics.
1. Create Definitions: To develop an effective metric, it is necessary to have a clear
and concise definition of the entities and their measurable attributes. To avoid
ambiguity, terms such as defect, size, quality, maintainability, and user-friendliness
should be precisely defined.
2. Define a model: A model is derived for the metrics. This model helps define the
calculation of metrics. The model should be easily adaptable to future specifications.
The following questions must be considered when defining a model:
 Does the model offer more information than is currently accessible?
 Is this information useful?
 Does it contain the desired details?
3. Establish criteria for measurement: The model is decomposed into its lowest-
level metric entities, and the counting criteria (used to measure each entity) are
defined. This defines the measurement approach for each metric primitive. For
instance, line of code (LOC) is a standard metric to estimate the extent of a software
project. Before measuring size in LOC, precise counting criteria must be established.
4. Choose what is desirable: After determining what and how to measure, it is
necessary to determine if action is required. For instance, no corrective action is
needed if the software meets the quality requirements. If this is not the case, then goals
can be established to assist the software in meeting the set quality standards. Note that
the objectives should be reasonable, achievable, and based on actions that provide
support.
5. Statistical reporting: Once all data for a metric has been collected, it should be
reported to the appropriate party. This includes defining the report format, the data
extraction and reporting cycle, the reporting mechanisms, etc.
6. Additional criteria: Additional ‘generic’ qualifiers for metrics should be
determined. In other words, a valid metric for multiple additional extraction qualifiers
must be identified.
The selection and development of software metrics are not complete until the
relationship between measurement and people is understood. The success of metrics in
an organization is contingent on the dispositions of those involved in data collection,
calculation, and reporting, as well as those using the metrics. In addition, metrics
should be centered on the process, initiatives, and products, not the individuals
involved in the activity.

INCIDENT HANDLING PROCESS

1. Preparation
Preparation is the key to effective incident response. Even the best incident response
team cannot effectively address an incident without predetermined guidelines. A strong
plan must be in place to support your team. In order to successfully address security
events, these features should be included in an incident response plan:
 Develop and Document IR Policies: Establish policies, procedures, and
agreements for incident response management.
 Define Communication Guidelines: Create communication standards and
guidelines to enable seamless communication during and after an incident.
 Incorporate Threat Intelligence Feeds: Perform ongoing collection, analysis,
and synchronization of your threat intelligence feeds.
 Conduct Cyber Hunting Exercises: Conduct operational threat hunting
exercises to find incidents occurring within your environment. This allows for
more proactive incident response.
 Assess Your Threat Detection Capability: Assess your current threat
detection capability and update risk assessment and improvement programs.
The following resources may help you develop a plan that meets your company’s
requirements:
 NIST Guide: Guide to Test, Training, and Exercise Programs for IT Plans and
Capabilities
 SANS Guide: SANS Institute InfoSec Reading Room, Incident Handling,
Annual Testing and Training
2. Detection and Reporting
The focus of this phase is to monitor security events in order to detect, alert, and report
on potential security incidents.
 Monitor: Monitor security events in your environment using firewalls,
intrusion prevention systems, and data loss prevention.
 Detect: Detect potential security incidents by correlating alerts within a SIEM
solution.
 Alert: Analysts create an incident ticket, document initial findings, and assign
an initial incident classification.
 Report: Your reporting process should include accommodation for regulatory
reporting escalations.
3. Triage and Analysis
The bulk of the effort in properly scoping and understanding the security incident takes
place during this step. Resources should be utilized to collect data from tools and
systems for further analysis and to identify indicators of compromise. Individuals
should have in-depth skills and a detailed understanding of live system responses,
digital forensics, memory analysis, and malware analysis.
As evidence is collected, analysts should focus on three primary areas:
 Endpoint Analysis
o Determine what tracks may have been left behind by the threat actor.
o Gather the artifacts needed to build a timeline of activities.
o Analyze a bit-for-bit copy of systems from a forensic perspective and
capture RAM to parse through and identify key artifacts to determine
what occurred on a device.
 Binary Analysis
o Investigate malicious binaries or tools leveraged by the attacker and
document the functionalities of those programs. This analysis is
performed in two ways.
1. Behavioral Analysis: Execute the malicious program in a VM to
monitor its behavior
2. Static Analysis: Reverse engineer the malicious program to
scope out the entire functionality.
 Enterprise Hunting
o Analyze existing systems and event log technologies to determine the
scope of compromise.
o Document all compromised accounts, machines, etc. so that effective
containment and neutralization can be performed.
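As a first pass before full reverse engineering, analysts often hash a suspicious
binary (so it can be checked against reputation databases) and dump its printable
strings. The Python sketch below illustrates that triage step; the file path is taken
from the command line and is a placeholder, not a reference to any real sample.

    import hashlib
    import re
    import sys

    def sha256_of(path):
        """Hash the binary so it can be looked up in reputation feeds."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def printable_strings(path, min_len=6):
        """Rough equivalent of the Unix 'strings' utility."""
        with open(path, "rb") as f:
            data = f.read()
        return re.findall(rb"[ -~]{%d,}" % min_len, data)

    if __name__ == "__main__":
        sample = sys.argv[1]  # e.g. a quarantined binary
        print("SHA-256:", sha256_of(sample))
        for s in printable_strings(sample)[:20]:
            print(s.decode("ascii", errors="replace"))

Embedded URLs, IP addresses, or mutex names surfaced this way often become the
indicators of compromise used during containment.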
4. Containment and Neutralization
This is one of the most critical stages of incident response. The strategy for
containment and neutralization is based on the intelligence and indicators of
compromise gathered during the analysis phase. After the system is restored and
security is verified, normal operations can resume.
 Coordinated Shutdown: Once you have identified all systems within the
environment that have been compromised by a threat actor, perform a
coordinated shutdown of these devices. A notification must be sent to all IR
team members to ensure proper timing.
 Wipe and Rebuild: Wipe the infected devices and rebuild the operating system
from the ground up. Change passwords of all compromised accounts.
 Threat Mitigation Requests: If you have identified domains or IP addresses
that are known to be leveraged by threat actors for command and control, issue
threat mitigation requests to block communication with those domains and
addresses on all egress channels (a blocklist sketch follows this list).
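One common way to action such a request is to translate the indicator list into
egress firewall rules. The sketch below emits Linux iptables commands for known-bad
IPs; the addresses and domain are placeholders, and in practice the rules would be
deployed through the organisation’s firewall management tooling rather than printed.

    # Known C2 indicators gathered during the analysis phase (placeholders).
    bad_ips = ["198.51.100.23", "203.0.113.77"]
    bad_domains = ["c2.example.net"]

    def egress_block_rules(ips, domains):
        """Emit iptables commands that drop outbound traffic to C2 hosts."""
        rules = [f"iptables -A OUTPUT -d {ip} -j DROP" for ip in ips]
        # Domains are usually blocked at the DNS resolver or web proxy;
        # here we simply record them for the DNS/proxy teams.
        notes = [f"# request DNS sinkhole for {domain}" for domain in domains]
        return rules + notes

    for line in egress_block_rules(bad_ips, bad_domains):
        print(line)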
5. Post-Incident Activity
There is more work to be done after the incident is resolved. Be sure to properly
document any information that can be used to prevent similar incidents in the
future.
 Complete an Incident Report: Documenting the incident will help to improve
the incident response plan and augment additional security measures to avoid
such security incidents in the future.
 Monitor Post-Incident: Closely monitor for activity post-incident, since
threat actors often re-appear. We recommend having an analyst review SIEM data
for any tripped indicators that may be associated with the prior incident (a
monitoring sketch follows this list).
 Update Threat Intelligence: Update the organization’s threat intelligence
feeds.
 Identify preventative measures: Create new security initiatives to prevent
future incidents.
 Gain Cross-Functional Buy-In: Coordinating across the organization is
critical to the proper implementation of new security initiatives.
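A minimal sketch of that post-incident monitoring, assuming indicators from the prior
incident are kept as simple strings and that SIEM events can be exported one per line
(the file name and indicator values are placeholders):

    # Indicators retained from the prior incident (placeholder values).
    iocs = {"198.51.100.23", "c2.example.net", "evil_dropper.exe"}

    def scan_log_line(line):
        """Return any prior-incident indicators found in a log line."""
        return [ioc for ioc in iocs if ioc in line]

    # Assumed: a SIEM export or syslog file with one event per line.
    with open("siem_export.log", "r", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            hits = scan_log_line(line)
            if hits:
                print(f"line {lineno}: possible re-emergence, matched {hits}")

Any hit would justify re-opening triage and analysis rather than treating the
incident as closed.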

CONTINUITY STRATEGIES

Output of BC Strategy
The output of the business continuity (BC) strategy phase would generally include a
strategy for mitigation, (crisis) response, and recovery.
(a) Mitigation Strategy
The mitigation strategy draws from the risk assessment performed in the earlier
analysis phase. Risks that remain high despite the presence of mitigating controls
should be reviewed to understand why:
 Are the implemented controls ineffective, or are there other causes that drive
likelihood and/or impact variables up, in spite of these controls?
 Are there multiple causes of a risk, and have we addressed all or only some of
them? Obviously high-risk threats cannot be ignored and must be mitigated to
the best of our ability.
These threats must be identified, and further attempts to lower the risk they pose
must be implemented with the objective of preventing any potential disruption. In
addition, a mechanism must be in place to detect and sound the alarm should a threat
materialize. These detection mechanisms could take the form of monitoring tools that
capture and record abnormal changes in the environment or process.
While it is always better to prevent a disaster from happening, it is impossible to say
with one hundred percent certainty that one will never occur. In the unfortunate event
that a disaster causes business operations to be disrupted, a strategy is required to
ensure effective and timely recovery and resumption.
(b) Recovery Strategy
The recovery strategy should focus on re-gaining or re-establishing what has been
lost in the disaster.
 Think people, facilities, systems, records, equipment and the like.
 What has the disaster deprived the organisation of, and what resource needs to
be recovered to allow the organisation to carry out its critical business functions
and meet its minimum committed service levels?
 How quickly must these resources be made available? Then brainstorm on how
to acquire these resources within the acceptable time frame, guided by the
associated business function recovery time objective (RTO).
 What resources could be built or acquired by the organisation in anticipation of
a disaster? This model gives the highest level of recovery assurance as the
critical resource is guaranteed. For example, facilities, like a hot site, could be
purpose-built so that in the event of a disaster, a critical function can be
immediately up and running.
Alternatively, an organisation that does not own, or chooses not to own, spare
resources could lease them. An example of leasing is subscribing to a shared recovery
space with a reputable service provider. There is some minimal assurance that
recovery seats are available; however, with such a model there is no guarantee: the
seats are shared, and the first caller activating the recovery seats is given
priority.
Yet other organisations may choose to procure resources only when a disaster occurs.
This model gives the least recovery assurance as the required resources may not be
available when needed most.
In developing the recovery strategy, not only must one think about getting back
resources needed to continue critical business operations, one must also keep in mind
that the recovery must be done within the prescribed RTOs for these critical
operations. If a resource cannot be recovered in this time, an alternative means or
interim method of carrying on the critical operation must be found. These interim
measures are often called Temporary Operating Procedures (TOP).
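As a small illustration of that check, assume each critical function has an agreed
RTO and an estimated recovery time under its chosen strategy; any function whose
estimate exceeds its RTO needs a TOP. The figures below are invented for the example.

    # (function, RTO in hours, estimated recovery time in hours) - assumed values.
    functions = [
        ("Payments processing", 4, 2),    # hot site: within RTO
        ("Customer support", 8, 12),      # leased seats: misses RTO
        ("Payroll", 48, 24),
    ]

    for name, rto, estimate in functions:
        if estimate <= rto:
            print(f"{name}: strategy meets RTO ({estimate}h <= {rto}h)")
        else:
            print(f"{name}: RTO missed ({estimate}h > {rto}h) - define a TOP")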
(c) Crisis Response Strategy
Where an organisation does not already have an incident management or
response plan, the strategy might also include a response component that spells out the
prioritized activities that the organisation would undertake in a disaster. These
activities include emergency responses, like evacuation, situational assessment and
modes of communication.
Conclusion
Typically, the business continuity strategy outlines how the organisation will
prevent, respond to, and recover from a disaster.

It approaches recovery at a macro level and does not dwell on details. This is often
useful in providing an overview to management and allows them to see the “big
picture” for organisational recovery. It is important to gain their approval before we
proceed to decompose the strategy into detailed actionable steps in the plan
development phase of the project.

COMPUTER FORENSICS

Computer Forensics is a scientific method of investigation and analysis in order to
gather evidence from digital devices or computer networks and components which is
suitable for presentation in a court of law or legal body. It involves performing a
structured investigation while maintaining a documented chain of evidence to find out
exactly what happened on a computer and who was responsible for it.
TYPES
 Disk Forensics: It deals with extracting raw data from the primary or secondary
storage of the device by searching active, modified, or deleted files.
 Network Forensics: It is a sub-branch of Computer Forensics that involves
monitoring and analyzing the computer network traffic.
 Database Forensics: It deals with the study and examination of databases and
their related metadata.
 Malware Forensics: It deals with the identification of suspicious code and
studying viruses, worms, etc.
 Email Forensics: It deals with emails and their recovery and analysis, including
deleted emails, calendars, and contacts.
 Memory Forensics: Deals with collecting data from system memory (system
registers, cache, RAM) in raw form and then analyzing it for further
investigation.
 Mobile Phone Forensics: It mainly deals with the examination and analysis of
phones and smartphones and helps to retrieve contacts, call logs, incoming, and
outgoing SMS, etc., and other data present in it.

CHARACTERISTICS
 Identification: Identifying what evidence is present, where it is stored, and how
it is stored (in which format). Electronic devices can be personal computers,
Mobile phones, PDAs, etc.
 Preservation: Data is isolated, secured, and preserved. This includes prohibiting
unauthorized personnel from using the digital device so that digital evidence is
not tampered with, mistakenly or purposely, and making a copy of the original
evidence.
 Analysis: Forensic lab personnel reconstruct fragments of data and draw
conclusions based on evidence.
 Documentation: A record of all the visible data is created. It helps in recreating
and reviewing the crime scene. All the findings from the investigations are
documented.
 Presentation: All the documented findings are produced in a court of law for
further investigations.
PROCEDURE:
The procedure starts with identifying the devices used and collecting preliminary
evidence at the crime scene. A court warrant is then obtained for the seizure of the
evidence, and the evidence is seized. The evidence is then transported to the
forensics lab for further investigation; the documented process of moving evidence
from the crime scene to the lab is called the chain of custody. The evidence is then
copied for analysis, and the original is kept safe, because analysis is always
performed on the copy and never on the original evidence.
The copy is then analyzed for suspicious activity, and the findings are documented in
a non-technical tone. The documented findings are then presented in a court of law
for further investigation.
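Because analysis is always performed on a copy, examiners typically prove that the
copy is bit-for-bit faithful by hashing both images and comparing the digests. A
minimal Python sketch, with placeholder file names:

    import hashlib

    def image_hash(path):
        """SHA-256 of a disk image, read in chunks to handle large files."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    original = image_hash("evidence_original.dd")      # placeholder names
    working = image_hash("evidence_working_copy.dd")

    # Matching digests demonstrate the working copy is identical to the
    # original, which supports the documented chain of custody.
    print("MATCH" if original == working else "MISMATCH: copy is not faithful")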
Some Tools used for Investigation:
Tools for Laptop or PC –
 COFEE – Computer Online Forensic Evidence Extractor, a suite of tools for
Windows developed by Microsoft.
 The Coroner’s Toolkit – A suite of programs for Unix analysis.
 The Sleuth Kit – A library of tools for both Unix and Windows.
Tools for Memory :
 Volatility (usage example below)
 WindowsSCOPE
Tools for Mobile Device :
 MicroSystemation XRY/XACT
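As an illustration of how such tools are driven, Volatility 3 is invoked from the
command line against a captured memory image; the image file name below is a
placeholder, and the available plugins depend on the installed tool version.

    # List the processes that were running when memory was captured:
    python3 vol.py -f memory.img windows.pslist

    # Show network connections that were live at capture time:
    python3 vol.py -f memory.img windows.netscan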
APPLICATIONS
 Intellectual Property theft
 Industrial espionage
 Employment disputes
 Fraud investigations
 Misuse of the Internet and email in the workplace
 Forgeries related matters
 Bankruptcy investigations
 Issues concerning regulatory compliance
Advantages of Computer Forensics :
 To produce evidence in the court, which can lead to the punishment of the
culprit.
 It helps companies gather important information when their computer systems
or networks may have been compromised.
 Efficiently tracks down cyber criminals from anywhere in the world.
 Helps to protect the organization’s money and valuable time.
 Allows investigators to extract, process, and interpret factual evidence so
that the cybercriminal’s actions can be proved in court.
Disadvantages of Computer Forensics :
 Before the digital evidence is accepted into court it must be proved that it is not
tampered with.
 Producing and keeping electronic records safe is expensive.
 Legal practitioners must have extensive computer knowledge.
 Need to produce authentic and convincing evidence.
 If the tool used for digital forensics does not meet specified standards, the
evidence can be rejected in a court of law.
 A lack of technical knowledge on the part of the investigating officer may
prevent the desired result from being achieved.

TEAM ESTABLISHMENT

 Shared goal. Members move in the same direction knowing what they’re
working toward and why they’re there.
 Curious and adaptable. People are open to learning new things and adapt
quickly to changing circumstances and new information.
 Trust and commitment. Members hold each other accountable and trust each
other to do their work and look out for the team's interests.
 Diverse. Diversity of experiences, backgrounds, and even locations and work
status (e.g., employee versus independent talent) provides the perspectives,
knowledge, and creativity required to solve problems well.
 Open communication. Everyone feels safe being authentic and constructively
shares their concerns and feedback.
 Inclusive. Members respect each other's perspectives, feel heard and safe
enough to take risks and be vulnerable.
 Complementary skillsets. Members have the skills and knowledge to deliver on
their responsibilities.
14 steps to building a successful team
The more you can rely on your team to regularly deliver remarkable work, the more
comfortable you may feel taking on greater responsibilities and launching bigger
initiatives.
Remember that great teams consist of anyone required to get the work done. This may
be a mix of employees, independent talent, consultants, agencies, and people working
remotely and onsite. Here’s how to create an environment that enables everyone to
contribute at their highest potential.
1. Set business goals
Setting goals provides your team a framework by:
 Giving them purpose, which may increase their engagement, motivation, and
productivity
 Aligning their work with business goals
 Informing them what the team’s structure should look like, roles required,
people’s responsibilities, and skillsets needed
 Identifying hiring priorities, such as when specific skills may be required and
for how long you’ll need them
 Reducing risk by flagging potential challenges like the equipment and
processes needed for a project
2. Define roles and skillsets required
Now that you know what your goals are, you can determine the skillsets required to
achieve them. Knowing each person’s responsibilities will also guide you in writing
accurate job descriptions and determining what success looks like for each person.
You may also identify what work should be handled by independent talent versus an
employee so that you can effectively allocate resources. For example, a content team is
made up of people managing the operations and people producing the content. You
may find the most efficient way to generate quality content at a reliable pace is by
contracting independent writers and graphic designers.

3. Maximize the skills of each team member
The objective of this step is to get the best work out of people by utilizing their
strengths to the fullest. Regularly review the capabilities of each team member,
including their strengths and weaknesses. Then determine where people have
complementary skills. Knowing who can back up another person and the type of work
someone does well and enjoys most may reduce their stress levels.
Knowing each person’s strengths and interests may also show where to invest in
learning and development (L&D). Workers, especially the younger generations, often
appreciate companies that invest in their career growth. They may show their
appreciation by staying in their jobs longer and working harder.
If you don’t have a formal L&D program, that’s OK. You can show employees you
care about their career growth by paying for an online training course or webinar,
pairing them up with a mentor, or sending them to a conference.
Related: 7 Ways To Ensure Learning Opportunities for Remote Teams
4. Set expectations from day one
Every team member should know what’s expected from them, their deadlines, the
support you’ll provide, the processes available to facilitate their work, and how you’ll
evaluate their success. They should also know what doing a good job looks like.
Setting expectations includes how they should communicate. In addition to
establishing respectful communication guidelines and using inclusive language, you
can improve team communication by proactively addressing questions such as: How
quickly should they respond to emails? When should they have conversations over the
phone versus on a video call? Is it OK to turn all email, text, and messaging
notifications off after work hours?
5. Embrace diversity
Studies have long established how diversity exposes people to different perspectives,
which can lead to new ways of thinking. And those new thoughts may result in greater
innovation, faster problem-solving, and deeper customer connections.
Working with people from different backgrounds, beliefs, lifestyles, and life
experiences also benefits the individual on a personal level. When teams have regular
exposure to people outside of their normal circle, they become more aware of their
own biases and stereotypes.
As people become more self-aware, their minds may open. Open minds then
lead to open hearts as they begin communicating and collaborating with greater
empathy and adaptability.
Whether for business or personal gain, there’s no downside to building diverse teams.
In fact, a Pew Research Center survey shows most U.S. employees (56%) support
more efforts toward it.
6. Allow your team to take risks and experiment
Taking risks helps your team grow and find creative solutions to problems. When
given the opportunity to test ideas and fail, they may find a way to do something
better, faster, or cheaper. They may even uncover untapped opportunities.
Some of their experiments may fizzle and that’s OK too; they’re still stretching
themselves and growing from the experience. And they may be able to apply some of
those learnings to future projects, which increases their chances of success.
If you’re thinking, “There’s no time to test ideas, we’re scrambling just to get our
regular work done!” there’s a solution. See how PGA of America finds the time and
budget for testing new ideas.
7. Give authentic recognition
Everyone wants to feel their work matters and that they’re appreciated. Generally, the
more sincere and frequent the recognition, the more engaged and excited a person feels
about their job.
Although there are many ways to recognize someone—an award, money, or promotion
—saying a quick thank you during a weekly stand-up meeting may be enough.
Whatever you do, be authentic as most employees (64%) prefer authenticity over
frequency.
So when giving a pat on the back, don’t just say, “Hey, good job last week.” Specify
the person, what they did, and how their work made a difference or provided value to
the organization.
Read: How To Show Appreciation to Employees: 10 Simple Ways With Big Impact
8. Promote individual development
Over the last several years, companies have had a tough time keeping and attracting
people with in-demand skills. Trends suggest that it’s going to get tougher, which is
why a LinkedIn Workplace report shows 81% of companies are leaning on their
learning and development departments (L&D) for help.
Effective talent development strategies include:
 Offering professional training on the latest technologies and tools
 Connecting team members with coaches to level up their communication
or management skills
 Mentoring employees by teaming them with leaders from other functions
 Giving time, and possibly funds, to pursue relevant work certifications and
higher education
 Having them work with external talent on a project to learn new skills and ways
of thinking
9. Don’t micromanage
According to a survey by the American Psychological Association, people who feel
micromanaged at work are more tense and stressed than those who are given their
space, to the tune of 64% vs. 36%. So let your team complete tasks with the
appropriate level of autonomy.
Be the person they feel comfortable going to when they want guidance or feedback. If
you hire the right people and have the right processes in place, they will get their work
done without you staring over their shoulders all day. You can check in during weekly
standup meetings or monthly 1:1s, depending on the work and role. And here’s a side
bonus: when you give people more freedom, you have more time to get your own work
done.

10. Motivate your team with positivity
Your team will go through many ups and downs throughout the year. The more
positive their work environment, the more resilient they may be to change and stress.
And the faster they may recover when something goes wrong.
Many of the top ways to promote workplace positivity are the same characteristics and
practices that build successful teams. You must honor each person’s individual needs
and provide an emotionally and physically healthy place to work, as well as
opportunities for growth. The U.S. Surgeon General calls these the “five essentials”
and believes they’re so critical for workplace well-being that he created a framework
for achieving them.

11. Establish strong leadership
To paraphrase author Simon Sinek, the difference between a manager and a leader is
that managers are responsible for the job and leaders are responsible for the people
who are responsible for the job. What he means is that managers must know how to get
the job done. Leaders must know how to inspire and motivate people to do their best
work, excel through challenging times, and keep everyone connected and moving
toward a common goal.
Being a strong leader requires looking inward first. You must know how to leverage
what you do well and recognize areas that you can improve on. Be the example by
holding yourself accountable when you make mistakes or go back on decisions that
didn’t work out. You may find it difficult to be vulnerable enough to admit fault or ask
for help, but it gets easier over time. So ask your team where you could improve and
regularly solicit feedback. As teams see you do the work toward personal and
professional growth, they’re encouraged to do the work too.
12. Create a team culture
Every company has a unique culture. And teams create their own subcultures based on
the leader, the team members, and the work they do. Team cultures became more
critical to a team’s success in the wake of the COVID-19 pandemic. For most people
working from home at that time, their immediate team was their main connection to
the company. When offices opened back up, subcultures remained strong and still
heavily influence how teams perform today.
That period also proved that you don’t have to sit shoulder-to-shoulder to create a
strong team culture. If you’re intentionally inclusive, remote
workers can feel just as valued and connected as their onsite colleagues.
13. Foster connections within the team
Connections bring more humanity, and sometimes a little levity, to stressful workdays.
When people feel a closer bond with each other, they develop trust. That trust is what
encourages them to collaborate and communicate more honestly with each other. If
they trust someone’s work, they may be willing to lend a helping hand. If they trust
what someone says, they may be more willing to resolve an issue when it arises.
Here are a few ways to start fostering team connections. Be mindful that people have
different comfort levels when talking about their personal lives. You can encourage but
never force someone to share or attend an event:
 Start team meetings with a few minutes of casual conversation before launching
into business
 Have a fun channel on Slack where the team can share non-work-related
thoughts and pictures
 Practice open and transparent communication
 Assign new hires to an onboarding buddy
 Host regular social events
 Encourage employees to recognize and thank each other publicly via a shared
team channel or platform
