
IFS 504: IT SECURITY AND RISK MANAGEMENT

Introduction to Information Security


The internet is not a single network, but a worldwide collection of loosely
connected networks that are accessible by individual computer hosts, in a
variety of ways, to anyone with a computer and a network connection. Thus,
individuals and organizations can reach any point on the internet without regard
to national or geographic boundaries or time of day. However, along with the
convenience and easy access to information come risks. Among them are the
risks that valuable information will be lost, stolen, changed, or misused. If
information is recorded electronically and is available on networked computers,
it is more vulnerable than if the same information is printed on paper and locked
in a file cabinet. Intruders do not need to enter an office or home; they may not
even be in the same country. They can steal or tamper with information without
touching a piece of paper or a photocopier. They can also create new electronic
files, run their own programs, and hide evidence of their unauthorized activity.
Basic Security Concepts

Three basic security concepts important to information on the internet are confidentiality, integrity, and availability. Concepts relating to the people who use that information are authentication, authorization, and nonrepudiation.
When information is read or copied by someone not authorized to do so, the
result is known as loss of confidentiality. For some types of information,
confidentiality is a very important attribute. Examples include research data,
medical and insurance records, new product specifications, and corporate
investment strategies. In some locations, there may be a legal obligation to
protect the privacy of individuals. This is particularly true for banks and loan
companies; debt collectors; businesses that extend credit to their customers or
issue credit cards; hospitals, doctors’ offices, and medical testing laboratories;
individuals or agencies that offer services such as psychological counseling or
drug treatment; and agencies that collect taxes. Information can be corrupted
when it is available on an insecure network.
When information is modified in unexpected ways, the result is known as loss
of integrity. This means that unauthorized changes are made to information,
whether by human error or intentional tampering. Integrity is particularly
important for critical safety and financial data used for activities such as
electronic funds transfers, air traffic control, and financial accounting.
Information can be erased or become inaccessible, resulting in loss of availability. This means that people who are authorized to get information
cannot get what they need. Availability is often the most important attribute in
service-oriented businesses that depend on information (for example, airline
schedules and online inventory systems).
Availability of the network itself is important to anyone whose business or
education relies on a network connection. When users cannot access the
network or specific services provided on the network, they experience a denial
of service. To make information available to those who need it and who can be
trusted with it, organizations use authentication and authorization.
Authentication is proving that a user is the person he or she claims to be. That
proof may involve something the user knows (such as a password), something
the user has (such as a “smartcard”), or something about the user that proves the
person’s identity (such as a fingerprint).
Authorization is the act of determining whether a particular user (or computer
system) has the right to carry out a certain activity, such as reading a file or
running a program. Authentication and authorization go hand in hand. Users
must be authenticated before carrying out the activity they are authorized to
perform. Security is strong when the means of authentication cannot later be
refuted—the user cannot later deny that he or she performed the activity. This is
known as nonrepudiation.
These concepts of information security also apply to the term information assurance; that is, internet users want to be assured that:
 they can trust the information they use
 the information they are responsible for will be shared only in the manner that they expect
 the information will be available when they need it
 the systems they use will process information in a timely and trustworthy manner
In addition, information assurance extends to systems of all kinds, including large-scale distributed systems, control systems, and embedded systems, and it encompasses systems with hardware, software, and human components. The technologies of information assurance address system intrusions and compromises to information.
It is remarkably easy to gain unauthorized access to information in an insecure
networked environment, and it is hard to catch the intruders. Even if users have
nothing stored on their computer that they consider important, that computer
can be a “weak link,” allowing unauthorized access to the organization’s
systems and information. Seemingly innocuous information can expose a
computer system to compromise. Information that intruders find useful includes
which hardware and software are being used, system configuration, type of
network connections, phone numbers, and access and authentication procedures.

Security-related information can enable unauthorized individuals to access
important files and programs, thus compromising the security of the system.
Examples of important information are passwords, access control files and keys,
personnel information, and encryption algorithms. No one on the internet is
immune. Those affected include banks and financial companies, insurance
companies, brokerage houses, consultants, government contractors,
government agencies, hospitals and medical laboratories, network service
providers, utility companies, the textile business, universities, and wholesale
and retail trades. The consequences of a break-in cover a broad range of
possibilities: a minor loss of time in recovering from the problem, a decrease in
productivity, a significant loss of money or staff-hours, a devastating loss of
credibility or market opportunity, a business no longer able to compete, legal
liability, and the loss of life. Individuals may find that their credit card, medical,
and other private information has been compromised. Identity theft can affect
anyone.

Intrusion Detection Systems

The purpose of an intrusion detection system (IDS) is to protect the confidentiality, integrity, and availability of a system. Intrusion detection systems are designed to detect specific issues and are categorized as signature-based (SIDS) or anomaly-based (AIDS). An IDS can be implemented in software or hardware. Questions to keep in mind as you read: How do SIDS and AIDS detect malicious activity? What is the difference between the two? What techniques can attackers use to evade an IDS?

Types of Computer Attacks

Cyber-attacks can be categorized based on the activities and targets of the attacker. Each attack type can be classified into one of the following four classes (Sung & Mukkamala, 2003):

i. Denial-of-Service (DoS) attacks have the objective of blocking or restricting services delivered by the network or computer to its users.
ii. Probing attacks have the objective of acquisition of information about the
network or the computer system.
iii. User-to-Root (U2R) attacks have the objective of a non-privileged user
acquiring root or admin-user access on a specific computer or a system on
which the intruder had user-level access.

iv. Remote-to-Local (R2L) attacks involve sending packets to a victim machine over the network. The cybercriminal has no account on the system, but by exploiting a vulnerability obtains the privileges that a local end-user would have on the computer system.

Within these broad categories, there are many different forms of computer attacks. A summary of these attacks, with a brief explanation, characteristics, and examples, is presented in Table 1.

Table 1. Classes of computer attacks

Buffer Overflow: Attacks the buffer's boundaries and overwrites the memory area. Long URL strings are a common input. (Cowan et al., 1998)

Worm: Reproduces itself on the local host or through the network. Examples: SQL Slammer, Mydoom, CodeRed, Nimda.

Trojan: Programs that appear attractive and genuine but have malicious code embedded inside them. Examples: Zeus, SpyEye. (Alazab et al., 2013)

Denial of Service (DoS): A security event that disrupts network services, often started by forcing resets on the target computers; users can no longer connect to the system because the service is unavailable. Examples: buffer overflow, Ping of Death (PoD), TCP SYN flood, smurf, teardrop. (Zargar et al., 2013)

Common Gateway Interface (CGI) Scripts: The attacker takes advantage of CGI scripts to create an attack by sending illegitimate inputs to the web server. Example: phishing email. (Aljawarneh, 2016)

Traffic Flooding: Attacks the limited capacity of a NIDS to handle huge traffic loads and to investigate possible intrusions; if a cybercriminal can cause congestion in the network, the NIDS will be kept busy analyzing the traffic. Examples: Denial of Service (DoS) or Distributed Denial of Service (DDoS). (Zargar et al., 2013)

Physical Attack: Aims to attack the physical mechanisms of the computer system. Examples: cold boot, evil maid. (Pasqualetti et al., 2013)

Password Attack: Aims to break the password within a short time, and is often noticeable as a sequence of failed logins. Examples: dictionary attack, rainbow-table attack. (Das et al., 2014)

Information Gathering: Gathers information or finds weaknesses in computers or networks by sniffing or searching. Examples: system scan, port scan. (Bou-Harb et al., 2014)

User to Root (U2R) attack: The cybercriminal gains access as a normal user at the beginning and then upgrades to a super-user, which may lead to exploitation of several vulnerabilities of the system. Examples: intercepted packets, rainbow-table attack, social engineering, rootkit, loadmodule, perl. (Raiyn, 2014)

Remote to Local (R2L) attack: The cybercriminal sends packets to a remote system by connecting to the network without having an account on the system. Examples: warezclient, ftp write, multihop, phf, spy, warezmaster, imap. (Raiyn, 2014)

Probe: Identifies valid IP addresses by scanning the network to gather host data. Examples: sweep, portsweep. (So-In et al., 2014)

Classification of Intrusion Detection Systems

Intrusion detection systems are designed to be deployed in different environments. Like many cybersecurity solutions, an IDS can be either host-based or network-based.

Host-Based IDS (HIDS): A host-based IDS is deployed on a particular endpoint and designed to protect it against internal and external threats. Such an IDS may have the ability to monitor network traffic to and from the machine, observe running processes, and inspect the system's logs. A host-based IDS's visibility is limited to its host machine, decreasing the available context for decision-making, but it has deep visibility into the host computer's internals.

Network-Based IDS (NIDS): A network-based IDS solution is designed to monitor an entire protected network. It has visibility into all traffic flowing through the network and makes determinations based upon packet metadata and contents. This wider viewpoint provides more context and the ability to detect widespread threats; however, these systems lack visibility into the internals of the endpoints that they protect.

Due to the different levels of visibility, deploying a HIDS or NIDS in isolation provides incomplete protection to an organization's system. A unified threat management solution, which integrates multiple technologies in one system, can provide more comprehensive security.

IDS Detection Methods

Beyond their deployment location, IDS solutions also differ in how they
identify potential intrusions:

 Signature Detection: Signature-based IDS solutions use fingerprints of known threats to identify them. Once malware or other malicious content has been identified, a signature is generated and added to the list used by the IDS solution to test incoming content. This enables an IDS to achieve a high threat detection rate with no false positives, because all alerts are generated based upon detection of known-malicious content. However, a signature-based IDS is limited to detecting known threats and is blind to zero-day vulnerabilities.

 Anomaly Detection: Anomaly-based IDS solutions build a model of the “normal” behaviour of the protected system. All future behaviour is compared to this model, and any anomalies are labeled as potential threats and generate alerts. While this approach can detect novel or zero-day threats, the difficulty of building an accurate model of “normal” behaviour means that these systems must balance false positives (incorrect alerts) with false negatives (missed detections).

 Hybrid Detection: A hybrid IDS uses both signature-based and anomaly-based detection. This enables it to detect more potential attacks with a lower error rate than using either system in isolation. (A code sketch contrasting these detection approaches follows this list.)
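To make the two approaches concrete, here is a minimal sketch in JavaScript, the language used for the code examples later in these notes. The signature patterns, event fields, and threshold values are illustrative assumptions, not the behaviour of any particular IDS product.

// Minimal sketch of signature, anomaly, and hybrid detection (illustrative only).
const signatures = [
  /<script>/i,     // hypothetical known-bad payload pattern
  /etc\/passwd/i   // hypothetical path-traversal indicator
];

// Signature detection: flag content matching a known-bad fingerprint.
function signatureDetect(payload) {
  return signatures.some((sig) => sig.test(payload));
}

// Anomaly detection: compare a metric against a learned "normal" baseline.
// Here the model is just a mean and standard deviation of requests per minute.
const baseline = { mean: 120, stdDev: 15 }; // assumed learned from history

function anomalyDetect(requestsPerMinute) {
  const zScore = Math.abs(requestsPerMinute - baseline.mean) / baseline.stdDev;
  return zScore > 3; // more than 3 standard deviations from normal
}

// Hybrid detection: alert if either approach raises a flag.
function hybridDetect(payload, requestsPerMinute) {
  return signatureDetect(payload) || anomalyDetect(requestsPerMinute);
}

console.log(hybridDetect('GET /index.html', 125));                   // false
console.log(hybridDetect('GET /?q=<script>alert(1)</script>', 125)); // true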
IDS versus Firewalls

Intrusion Detection Systems and firewalls are both cybersecurity solutions that
can be deployed to protect an endpoint or network. However, they differ
significantly in their purposes.

An IDS is a passive monitoring device that detects potential threats and generates alerts, enabling security operations center (SOC) analysts or incident responders to investigate and respond to the potential incident. An IDS provides no actual protection to the endpoint or network. A firewall, on the other hand, is designed to act as a protective system. It performs analysis of the metadata of network packets and allows or blocks traffic based upon predefined rules. This creates a boundary over which certain types of traffic or protocols cannot pass.

Since a firewall is an active protective device, it is more like an Intrusion Prevention System (IPS) than an IDS. An IPS is like an IDS but actively blocks identified threats instead of simply raising an alert. This complements the functionality of a firewall, and many next-generation firewalls (NGFWs) have integrated IDS/IPS functionality. This enables them to both enforce the predefined filtering rules (firewalls) and detect and respond to more sophisticated cyber threats (IDS/IPS).

Selecting an IDS Solution

An IDS is a valuable component of any organization’s cybersecurity deployment. A simple firewall provides the foundation for network security, but many advanced threats can slip past it. An IDS adds an additional line of defense, making it more difficult for an attacker to gain access to an organization’s network undetected.

When selecting an IDS solution, it is important to carefully consider the deployment scenario. In some cases, an IDS may be the best choice for the job, while, in others, the integrated protection of an IPS may be a better option. Using an NGFW that has built-in IDS/IPS functionality provides an integrated solution, simplifying threat detection and security management.

Threat Assessment

Threat Assessment is a fact-based, systematic process designed to IDENTIFY, INQUIRE, ASSESS, and MANAGE potentially dangerous or violent situations. A key goal is to distinguish between an individual who MAKES a threat versus one who POSES a threat.

What is Threat Assessment? What is it NOT?

The first step in creating and implementing the Threat Assessment process in
your school is to have a clear idea of the purpose, capabilities, and limitations of
threat assessment. In other words, knowing what it IS, and what it IS NOT.

Threat Assessment IS:
 A fact-based, investigative approach to determine how likely a person is to carry out a threat of violence (Safe School Initiative Study, 2002).
 A means to identify, assess, and manage individuals who are at risk for violence against themselves or others.
 A way to identify someone who has the potential for violence in many forms: self-harm, assault, risk-taking behaviours, suicide, substance abuse, and other aggressive or dangerous behaviours.

Threat Assessment IS NOT:
 A simple checklist of warning signs or red flags used to remove a student from school.
 A means to label a student as a troublemaker and enact consequences.
 A means to find "the next school shooter".

The four steps of the threat assessment process are:
 Identify the person or situation whose behaviour has raised concern
about potential violence.
 Inquire, ask questions, and gather additional relevant information about
the person and situation.
Note: The focus of threat assessment is to understand the situation and
how best to mitigate safety concerns. It is not the same as a criminal or
disciplinary investigative process.
 Assess the person and situation based on the totality of the information
that is reasonably available, to determine whether the person or situation
poses a threat of violence or harm to others and/or self.
 Manage the threat by implementing an intervention, supervision, and/or
monitoring plan to prevent harm where possible, and to reduce and
mitigate the impact of the situation.

Threatening and other disturbing behaviour can come in a variety of forms. A
threat may be:
 Expressed or communicated verbally, behaviourally, visually, in writing,
electronically, or through other means.
 Expressed directly or indirectly.
 Issued by someone known or unknown to the target.

Threat Assessment teams and programs are designed to address any behaviour
or communication that raises concern that a person or situation may pose a
danger to the safety of the school, campus, or workplace.

Now that we’ve briefly examined what threat assessment is, let’s identify what
it is not. Threat assessment is not a simple checklist of warning signs or red
flags that an administrator or school counselor completes based on a single
threat or incident. Threat assessment examines the whole picture, not just an
isolated event. The use of threat assessment principles is not a means to kick
kids out of school or label them as troublemakers, but instead to craft a plan for
effectively intervening and managing the individual.

Perhaps most importantly, threat assessment is not just about “finding the next
school shooter”. It goes far beyond just that single purpose. Threat assessment
can assist schools in identifying and intervening with a wide range of troubling
or potentially violent situations.

It is also important to understand the distinction between threat assessments and vulnerability assessments, as these terms are often used interchangeably and
incorrectly. A vulnerability assessment (sometimes called a site survey or
security audit) deals with things, not people. It focuses on the facility, policies,
and procedures, not individuals. A vulnerability assessment should be scheduled
and conducted on a periodic basis to examine the security of the physical plant,
the daily operational practices, and to detect potential vulnerabilities or risks.
The vulnerability identification process enables you to identify and understand
weaknesses in your system, underlying infrastructure, support systems, and
major applications. It allows you to analyze the potential exposures generated
by your supply chain and your business partners.
A security vulnerability in an application is a weak spot that might be exploited
by a security threat. Risks are the potential consequences and impacts of
unaddressed vulnerabilities.

Impact of Security Breaches

Security breaches affect organizations in a variety of ways. They often result in the following:
 Loss of revenue
 Damage to the reputation of the organization
 Loss or compromise of data
 Interruption of business processes
 Damage to customer confidence
 Damage to investor confidence
 Legal consequences: in many states and countries, legal consequences are associated with the failure to secure a system; examples include Sarbanes-Oxley, HIPAA, GLBA, and California SB 1386.

Security breaches can have far-reaching effects. When there is a perceived or real security weakness, the organization must take immediate action to ensure that the weakness is removed and the damage is limited.

Security Risk Management processes:
 Identify security threats (information disclosure, denial of service, tampering with data)
 Analyze and prioritize security risks
 Develop security remediation (fixes, configuration changes, security patches, etc.)
 Test the security remediation
 Reassess the security vulnerability after changes to an application, such as applying a patch or upgrading to a higher version.

What are the sources for identifying security vulnerabilities within an application?
 National Vulnerability Database: https://nvd.nist.gov/
 Common Vulnerabilities and Exposures: https://cve.mitre.org/
 Vendor security bulletins
 Security scans of the application using third-party tools
 Application testing and observations
 Regular monitoring of security vulnerabilities in related applications and environments (operating system, database, third-party libraries, etc.)

National Vulnerability Database (NVD)

The NVD is the U.S. government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).

Common Vulnerabilities and Exposures (https://cve.mitre.org/)

Common Vulnerabilities and Exposures (CVE®) is a dictionary of common names (i.e., CVE Identifiers) for publicly known information security vulnerabilities.
CVE’s common identifiers make it easier to share data across separate network
security databases and tools, and provide a baseline for evaluating the coverage
of an organization’s security tools. If a report from one of your security tools
incorporates CVE Identifiers, you may then quickly and accurately access fix
information in one or more separate CVE-compatible databases to remediate the
problem.
The mission of the CVE® Program is to identify, define, and catalog publicly
disclosed cybersecurity vulnerabilities.
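As an illustration of how CVE identifiers enable automation, the following sketch queries the NVD for a single CVE record. It assumes Node.js 18+ (for the built-in fetch) and the NVD CVE API 2.0 endpoint and response shape; verify both against the current NVD documentation before relying on them.

// Sketch: look up one CVE record via the NVD API (assumed v2.0 endpoint).
const cveId = 'CVE-2021-44228'; // example identifier (Log4Shell)

async function lookupCve(id) {
  const url = `https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=${id}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`NVD request failed: ${response.status}`);
  const data = await response.json();
  // Response shape assumed from the NVD API 2.0 documentation: a
  // "vulnerabilities" array whose entries carry the record under a "cve" key.
  for (const item of data.vulnerabilities ?? []) {
    const description = item.cve.descriptions?.find((d) => d.lang === 'en');
    console.log(item.cve.id, '-', description?.value);
  }
}

lookupCve(cveId).catch(console.error);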

The MITRE Corporation maintains CVE and its public web site, manages the compatibility program, oversees the CVE Numbering Authorities, and provides impartial technical guidance to the CVE Editorial Board throughout the process to ensure CVE serves the public interest. MITRE is not an acronym; it is simply the company's name. The framework MITRE released under the name ATT&CK, however, is an acronym: Adversarial Tactics, Techniques, and Common Knowledge.

MITRE ATT&CK in cyber security stands for MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK). The MITRE ATT&CK framework is a curated knowledge base and model for cyber adversary behaviour, reflecting the various phases of an adversary's attack lifecycle and the platforms they are known to target.

Application scanning using third-party tools for security vulnerabilities:
 HP Fortify WebInspect
 IBM Security AppScan
 TripWire WebApp360
 Rapid7 AppSpider
These security scan tools produce a vulnerability report that:
 Prioritizes each vulnerability (Low, Medium, High, Critical)
 Classifies each vulnerability (e.g., cross-site scripting, SQL injection, encryption not enforced)
 Details the vulnerability, identifying the web pages affected by it
 Suggests a solution

Application testing and observations

How would a customer remediate a security vulnerability? No matter how the vulnerability is detected, the customer should have it assessed by their security team. If the security team confirms that it is a security threat to the product, they should open a ticket with the vendor (for example, IBM), detailing the security vulnerability and providing supporting documentation.

Cross-Site Scripting

In this section, we explain what cross-site scripting is, describe the different varieties of cross-site scripting vulnerabilities, and spell out how to find and prevent cross-site scripting.

What is cross-site scripting (XSS)?

Cross-site scripting (also known as XSS) is a web security vulnerability that allows an attacker to compromise the interactions that users have with a vulnerable application. It allows an attacker to circumvent the same origin policy, which is designed to segregate different websites from each other. Cross-site scripting vulnerabilities normally allow an attacker to masquerade as a victim user, to carry out any actions that the user is able to perform, and to access any of the user's data. If the victim user has privileged access within the application, then the attacker might be able to gain full control over all of the application's functionality and data.

How does XSS work?

Cross-site scripting works by manipulating a vulnerable web site so that it returns malicious JavaScript to users. When the malicious code executes inside a victim's browser, the attacker can fully compromise their interaction with the application.

What are the types of XSS attacks?

There are three main types of XSS attacks. These are:
 Reflected XSS, where the malicious script comes from the current HTTP request.
 Stored XSS, where the malicious script comes from the website's database.
 DOM-based XSS, where the vulnerability exists in client-side code rather than server-side code.

Reflected cross-site scripting

Reflected XSS is the simplest variety of cross-site scripting. It arises when an application receives data in an HTTP request and includes that data within the immediate response in an unsafe way.

Here is a simple example of a reflected XSS vulnerability:

https://insecure-website.com/status?message=All+is+well.
<p>Status: All is well.</p>

The application doesn't perform any other processing of the data, so an attacker can easily construct an attack like this:

https://insecure-website.com/status?message=<script>/*+Bad+stuff+here...+*/</script>
<p>Status: <script>/* Bad stuff here... */</script></p>

If the user visits the URL constructed by the attacker, then the attacker's script
executes in the user's browser, in the context of that user's session with the
application. At that point, the script can carry out any action, and retrieve any
data, to which the user has access.
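To show where this flaw lives on the server, here is a hypothetical Express handler that reproduces the vulnerable behaviour above. The route and parameter names simply mirror the example URL; this is an illustrative sketch, not code from any real site.

// Sketch of the vulnerable server-side behaviour (do NOT deploy).
// Assumes the Express framework: npm install express
const express = require('express');
const app = express();

app.get('/status', (req, res) => {
  // VULNERABLE: the query parameter is echoed into HTML unencoded,
  // so a <script> tag in "message" executes in the visitor's browser.
  res.send(`<p>Status: ${req.query.message}</p>`);
});

app.listen(3000);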

Stored Cross-Site Scripting

Stored XSS (also known as persistent or second-order XSS) arises when an application receives data from an untrusted source and includes that data within its later HTTP responses in an unsafe way.

The data in question might be submitted to the application via HTTP requests; for example, comments on a blog post, user nicknames in a chat room, or contact details on a customer order. In other cases, the data might arrive from other untrusted sources; for example, a webmail application displaying messages received over SMTP, a marketing application displaying social media posts, or a network monitoring application displaying packet data from network traffic.

Here is a simple example of a stored XSS vulnerability. A message board application lets users submit messages, which are displayed to other users:
<p>Hello, this is my message!</p>

The application doesn't perform any other processing of the data, so an attacker
can easily send a message that attacks other users:
<p><script>/* Bad stuff here... */</script></p>

DOM-based cross-site scripting

DOM-based cross-site scripting is a type of cross-site scripting (XSS) attack executed within the Document Object Model (DOM) of a page loaded into the browser. DOM-based XSS (also known as DOM XSS) arises when an application contains some client-side JavaScript that processes data from an untrusted source in an unsafe way, usually by writing the data back to the DOM.
application contains some client-side JavaScript that processes data from an
untrusted source in an unsafe way, usually by writing the data back to the DOM.

In the following example, an application uses some JavaScript to read the value from an input field and write that value to an element within the HTML:

var search = document.getElementById('search').value;
var results = document.getElementById('results');
results.innerHTML = 'You searched for: ' + search;

If the attacker can control the value of the input field, they can easily construct a
malicious value that causes their own script to execute:
You searched for: <img src=1 onerror='/* Bad stuff here... */'>

In a typical case, the input field would be populated from part of the HTTP
request, such as a URL query string parameter, allowing the attacker to deliver
an attack using a malicious URL, in the same manner as reflected XSS.
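A common remediation for this particular example, offered as a sketch: assign the untrusted value with textContent, which the browser treats as plain text, instead of innerHTML, which parses it as markup.

// Safe variant of the search example above: textContent never parses HTML,
// so an attacker-controlled value is rendered as inert text.
var search = document.getElementById('search').value;
var results = document.getElementById('results');
results.textContent = 'You searched for: ' + search;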

What can XSS be used for?

An attacker who exploits a cross-site scripting vulnerability is typically able to:

 Impersonate or masquerade as the victim user.
 Carry out any action that the user is able to perform.
 Read any data that the user is able to access.
 Capture the user's login credentials.
 Perform virtual defacement of the web site.
 Inject trojan functionality into the web site.

Impact of XSS vulnerabilities

The actual impact of an XSS attack generally depends on the nature of the
application, its functionality and data, and the status of the compromised user.
For example:

 In a brochureware application, where all users are anonymous and all information is public, the impact will often be minimal.
 In an application holding sensitive data, such as banking transactions,
emails, or healthcare records, the impact will usually be serious.
 If the compromised user has elevated privileges within the application,
then the impact will generally be critical, allowing the attacker to take full
control of the vulnerable application and compromise all users and their
data.

How to find and test for XSS vulnerabilities?

The vast majority of XSS vulnerabilities can be found quickly and reliably
using Burp Suite's web vulnerability scanner.

Manually testing for reflected and stored XSS normally involves submitting
some simple unique input (such as a short alphanumeric string) into every entry
point in the application, identifying every location where the submitted input is
returned in HTTP responses, and testing each location individually to determine
whether suitably crafted input can be used to execute arbitrary JavaScript. In this way, you can determine the context in which the XSS occurs and select a
suitable payload to exploit it.

Manually testing for DOM-based XSS arising from URL parameters involves a
similar process: placing some simple unique input in the parameter, using the
browser's developer tools to search the DOM for this input, and testing each
location to determine whether it is exploitable. However, other types of DOM
XSS are harder to detect. To find DOM-based vulnerabilities in non-URL-based
input (such as document.cookie) or non-HTML-based sinks (like setTimeout),
there is no substitute for reviewing JavaScript code, which can be extremely
time-consuming. Burp Suite's web vulnerability scanner combines static and
dynamic analysis of JavaScript to reliably automate the detection of DOM-
based vulnerabilities.
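The manual workflow just described can be partly automated. The sketch below submits a unique marker to one entry point and checks whether the marker comes back unencoded in the response. The target URL and parameter name are assumptions for illustration, Node.js 18+ is assumed for the built-in fetch, and you should only probe systems you are authorized to test.

// Sketch: probe one suspected reflected-XSS entry point with a unique marker.
async function probeReflection(baseUrl, param) {
  const token = `probe${Date.now()}`;    // unlikely to occur naturally
  const marker = `<${token}>`;           // angle brackets test output encoding
  const url = `${baseUrl}?${param}=${encodeURIComponent(marker)}`;
  const body = await (await fetch(url)).text();
  if (body.includes(marker)) {
    console.log(`${param}: marker reflected unencoded; investigate further.`);
  } else if (body.includes(token)) {
    console.log(`${param}: marker reflected but apparently encoded; check the context.`);
  } else {
    console.log(`${param}: no reflection found.`);
  }
}

probeReflection('https://insecure-website.com/status', 'message').catch(console.error);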

Content security policy

Content security policy (CSP) is a browser mechanism that aims to mitigate the
impact of cross-site scripting and some other vulnerabilities. If an application
that employs CSP contains XSS-like behaviour, then the CSP might hinder or
prevent exploitation of the vulnerability. Often, the CSP can be circumvented to
enable exploitation of the underlying vulnerability.
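As a brief illustration, a policy like the one below tells the browser to load scripts only from the page's own origin, which blocks most injected inline scripts. The Express middleware is a hypothetical vehicle for setting the header, and the policy value is an example rather than a recommendation.

// Sketch: attach a Content-Security-Policy header to every response.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Allow resources only from our own origin; inline <script> blocks are
  // disallowed by default under script-src 'self'.
  res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'");
  next();
});

app.get('/', (req, res) => res.send('<p>Hello</p>'));
app.listen(3000);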

Dangling markup injection

Dangling markup injection is a technique that can be used to capture data cross-
domain in situations where a full cross-site scripting exploit is not possible, due
to input filters or other defenses. It can often be exploited to capture sensitive
information that is visible to other users, including CSRF tokens that can be
used to perform unauthorized actions on behalf of the user.

How to prevent XSS attacks?

Preventing cross-site scripting is trivial in some cases but can be much harder
depending on the complexity of the application and the ways it handles user-
controllable data.

In general, effectively preventing XSS vulnerabilities is likely to involve a combination of the following measures (a code sketch follows the list):

 Filter input on arrival. At the point where user input is received, filter
as strictly as possible based on what is expected or valid input.
 Encode data on output. At the point where user-controllable data is
output in HTTP responses, encode the output to prevent it from being
interpreted as active content. Depending on the output context, this might
require applying combinations of HTML, URL, JavaScript, and CSS
encoding.
 Use appropriate response headers. To prevent XSS in HTTP responses
that aren't intended to contain any HTML or JavaScript, you can use
the Content-Type and X-Content-Type-Options headers to ensure that
browsers interpret the responses in the way you intend.
 Content Security Policy. As a last line of defense, you can use Content
Security Policy (CSP) to reduce the severity of any XSS vulnerabilities
that still occur.
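Here is a minimal sketch of two of the measures above, output encoding and response headers, using a hypothetical Express application. The encodeHtml helper is written out for illustration; production code should prefer a maintained encoding library.

// Sketch: output encoding plus appropriate response headers.
function encodeHtml(value) {
  // Minimal HTML encoder for illustration; covers the five critical characters.
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

const express = require('express');
const app = express();

app.get('/status', (req, res) => {
  // Encode on output: user data can no longer break out of the HTML context.
  res.send(`<p>Status: ${encodeHtml(req.query.message)}</p>`);
});

app.get('/api/status', (req, res) => {
  // Appropriate headers: declare the content type and forbid MIME sniffing,
  // so the browser will not interpret this response as HTML.
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.type('application/json');
  res.send(JSON.stringify({ status: req.query.message ?? 'unknown' }));
});

app.listen(3000);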

Common questions about cross-site scripting

How common are XSS vulnerabilities? XSS vulnerabilities are very common,
and XSS is probably the most frequently occurring web security vulnerability.

How common are XSS attacks? It is difficult to get reliable data about real-
world XSS attacks, but it is probably less frequently exploited than other
vulnerabilities.

What is the difference between XSS and CSRF? XSS involves causing a web
site to return malicious JavaScript, while CSRF involves inducing a victim user
to perform actions they do not intend to do.

What is the difference between XSS and SQL injection? XSS is a client-side
vulnerability that targets other application users, while SQL injection is a
server-side vulnerability that targets the application's database.

How do I prevent XSS in PHP? Filter your inputs with a whitelist of allowed
characters and use type hints or type casting. Escape your outputs
with htmlentities and ENT_QUOTES for HTML contexts, or JavaScript
Unicode escapes for JavaScript contexts.

How do I prevent XSS in Java? Filter your inputs with a whitelist of allowed
characters and use a library such as Google Guava to HTML-encode your
output for HTML contexts, or use JavaScript Unicode escapes for JavaScript
contexts.


Incident Response

In the event that our risk management efforts fail, incident response exists to
react to such events. Incident response should be primarily oriented to the items
that we feel are likely to cause us pain as an organization, which we should now
know based on our risk management efforts. Reaction to such incidents should
be based, as much as is possible or practical, on documented incident response
plans, which are regularly reviewed, tested, and practiced by those who will be
expected to enact them in the case of an actual incident. The actual occurrence of such an emergency is not the time to (attempt to) follow documentation that has been languishing on a shelf, is outdated, and refers to processes or systems that have changed heavily or no longer exist.
The incident response process, at a high level, consists of:
 Preparation
 Detection and analysis
 Containment
 Eradication
 Recovery
 Post incident activity

Preparation

The preparation phase of incident response consists of all of the activities that we can perform, in advance of the incident itself, in order to better enable us to handle it. This typically involves having the policies and procedures that govern incident response and handling in place, conducting training and education for both incident handlers and those who are expected to report incidents, conducting incident response exercises, developing and maintaining documentation, and numerous other such activities.
The importance of this phase of incident response should not be underestimated. Without adequate preparation, it is extremely unlikely that response to an incident will go well or in the direction that we expect it to go. The time to determine what needs to be done, who needs to do it, and how to do it is not when we are faced with a burning emergency.

Detection and analysis

The detection and analysis phase is where the action begins to happen in
our incident response process. In this phase, we will detect the occurrence of an
issue and decide whether or not it is actually an incident so that we can respond
to it appropriately.
The detection portion of this phase will often be the result of monitoring of, or alerting based on, the output of a security tool or service. This may be output from an Intrusion Detection System (IDS), antivirus (AV) software, firewall logs, or proxy logs; alerting from a Security Information and Event Management (SIEM) tool if the program is internal, or from a Managed Security Service Provider (MSSP) if the program is external; or any of a number of similar sources.

The analysis portion of this phase is often a combination of automation from a tool or service, usually a SIEM, and human judgment. While we can often use some sort of thresholding to say that X number of events in a given amount of time is normal, or that a certain combination of events is not normal (two failed logins followed by a success, followed by a password change, followed by the creation of a new account, for instance), we will often want human intervention at a certain point when discussing incident response. Such human intervention will often involve review of logs output by various security, network, and infrastructure devices, contact with the party that reported the incident, and general evaluation of the situation. This can be expensive if you're running a team of analysts 24×7, so automation of as many functions as possible is key. When the incident handler evaluates the situation, they will make a determination regarding whether the issue constitutes an incident or not, an initial evaluation as to the criticality of the incident (if any), and contact any additional resources needed to proceed to the next phase.
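A toy version of the thresholding logic just described, with the event shape and thresholds invented for illustration: it flags the pattern of repeated failed logins followed by a success, at which point a human analyst would take over.

// Sketch: flag "N failed logins followed by a success" within a time window.
function suspiciousLoginPattern(events, maxFailures = 2, windowMs = 5 * 60 * 1000) {
  let failures = [];
  for (const event of events) {
    if (event.type === 'login_failure') {
      // Keep only failures inside the sliding time window.
      failures = failures.filter((t) => event.time - t <= windowMs);
      failures.push(event.time);
    } else if (event.type === 'login_success') {
      if (failures.length >= maxFailures) return true; // escalate to a human
      failures = [];
    }
  }
  return false;
}

const minute = 60 * 1000;
console.log(suspiciousLoginPattern([
  { type: 'login_failure', time: 0 },
  { type: 'login_failure', time: 1 * minute },
  { type: 'login_success', time: 2 * minute },
])); // true: two failures then a success inside the window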

Containment, Eradication, and Recovery

The containment, eradication, and recovery phase is where the majority of the
work takes place to actually solve the incident, at least in the short term.
Containment involves taking steps to ensure that the situation does not cause
any more damage than it already has, or to at least lessen any ongoing harm. If
the problem involves a malware infected server actively being controlled by
a remote attacker, this might mean disconnecting the server from the network,
putting firewall rules in place to block the attacker, and updating signatures or
rules on an Intrusion Prevention System (IPS) in order to halt the traffic from
the malware.

During eradication, we will attempt to remove the effects of the issue from our
environment. In the case of our malware infected server, we have already
isolated the system and cut it off from its command and control network. Now
we will need to remove the malware from the server and ensure that it does not
exist elsewhere in our environment. This might involve additional scanning of
other hosts in the environment to ensure that the malware is not present, and
examination of logs on the server and activities from the attacking devices on
the network in order to determine what other systems the infected server had
been in communication with.

With malware, particularly very new malware or variants, this can be a tricky
task to ensure that we have properly completed. The adversary is constantly
developing countermeasures to the most current security tools and
methodologies. Whenever doubt exists as to whether malware or attackers have
been truly evicted from our environment, we should err on the side of caution
while balancing the impact to operations. Each event requires a risk assessment.

Lastly, we need to recover to a state at least as good as the one we were in prior to the incident, or perhaps prior to when the issue started if we did not detect the problem immediately. This would potentially involve restoring devices or data from backup media, rebuilding systems, reloading applications, or any of a number of similar activities.
Additionally, we need to mitigate the attack vector that was used. Again, this can be a more painful task than it initially sounds, given potentially incomplete or unclear knowledge of the situation surrounding the incident and what exactly took place. We may find that we are unable to verify that backup media is actually clean and free of infection, the backup media may be bad entirely, application install bits may be missing, configuration files may not be available, and any of a number of similar issues.

Post incident activity

Post incident activity, as with preparation, is a phase we can easily overlook, but
should ensure that we do not. In the post incident activity phase, often referred
to as a postmortem (Latin for "after death"), we attempt to determine specifically
what happened, why it happened, and what we can do to keep it from happening
again. This is not just a technical review as policies or infrastructure may need
to be changed. The purpose of this phase is not to point fingers or place blame
(although this does sometimes happen), but to ultimately prevent or lessen the
impact of future such incidents.
Incident response processes can thus be categorized into two specific approaches, based on the point in the attack timeline at which they act:

Front-loaded prevention: This includes incident response processes that are designed specifically to collect indications and warning information for the purpose of early prevention of security attacks. The advantage is that some attacks might be thwarted by the early focus, but the disadvantage is that the high rate of false positive responses can raise the costs of incident response dramatically.

Back-loaded recovery: This includes incident response processes that are designed to collect information from various sources that can supply tangible, visible information about attacks that might be under way or completed. This approach reduces the false positive rates but is not effective in stopping attacks based on early warning data.

Hybrid incident response processes that attempt to do both front-end and back-
end processing of available information are certainly possible, but the real
decision point is whether to invest the time, resources, and money necessary for
front-loaded prevention. These two types of processes can be illustrated on the
time line of information that becomes available to the security team as an attack
proceeds. For front-loaded prevention, the associated response costs and false
positive rates are high, but the associated risk of missing information that could
signal an attack is lower; for a back-loaded response, these respective values are
the opposite (see Figure 1).

Figure 1. Comparison of front-loaded and back-loaded response processes

Combining front-loaded prevention with back-loaded recovery creates a comprehensive response picture; however, an emphasis on front-loaded prevention may be worth the increased cost.
Back-loaded incident response might be acceptable for smaller, less-critical infrastructure components, but for the protection of essential national services from cyber attack, the only reasonable option is to focus on front-end prevention of problems. By definition, national infrastructure supports essential services; hence, any response process that allows these services to be degraded misses their essential nature. The first implication is that costs associated with incident
response for national infrastructure prevention will tend to be higher than for
typical enterprise situations. The second implication is that the familiar false
positive metric, found so often in enterprise settings as a cost-cutting measure,
must be removed from the vocabulary of national infrastructure protection
managers.
It is worth suffering through a higher number of false positives to ensure
protection of essential national assets.

Incident Response Basics

Scripting is a critical part of the incident response (IR) process. In this section we will touch on the different elements required to start an IR collection script, as well as its analysis counterpart. When starting off, there are a number of decisions that need to be made, such as picking which language to use, what tools need to be carried over to the victim system, and what tools need to be ready on our analysis system to start diving into collected artifacts. The collection process is critical to the investigation, and depending on the size of your environment, you may only get one convenient shot to collect that data. Therefore, you want to be as thorough as possible. To state the obvious, you can't analyze data that you didn't collect in the first place. The good news is that there is a massive number of tools already built into OS X; these notes aim to use those tools to the best of their abilities so that fewer tools need to be carried over to the victim system.

Detection

One of the most important steps in the incident response process is the detection
phase. Detection (also called identification) is the phase in which events are
analyzed in order to determine whether these events might comprise a security
incident. Without strong detective capabilities built into the information systems,
the organization has little hope of being able to effectively respond to
information security incidents in a timely fashion. Organizations should have a
regimented and, preferably, automated fashion for pulling events from systems
and bringing those events into the wider organizational context. Often, when events on a particular system are analyzed independently and out of context, an actual incident might easily be overlooked. However, with the benefit of
seeing those same system logs in the context of the larger organization, patterns
indicative of an incident might be noticed. An important aspect of this phase of incident response is determining whether an incident is actually occurring or has occurred. It is a rather common occurrence for potential incidents to be deemed strange but innocuous after further review.

Methodology

Different books and organizations may use different terms and phases associated with the incident response process; this section will mirror the terms associated with the examination. Many incident-handling methodologies treat containment, eradication, and recovery as three distinct steps, as we do here. Other names for each step are sometimes used; the current exam lists a seven-step lifecycle but curiously omits the first step in most incident-handling methodologies: preparation. Perhaps preparation is implied, like the identification portion of AAA systems. We will therefore cover eight steps, mapped to the current exam.

Preparation

The preparation phase includes steps taken before an incident occurs. These
include training, writing incident response policies and procedures, and providing tools such as laptops with sniffing software, crossover cables, original
OS media, removable drives, etc. Preparation should include anything that may
be required to handle an incident or that will make incident response faster and
more effective.

Detection (identification)

Detection, also called identification, is the phase in which events are analyzed in order to determine whether these events might comprise a security incident, as discussed in more detail above.

Response (containment)

The response phase, or containment, of incident response is the point at which the incident response team begins interacting with affected systems and attempts to keep further damage from occurring as a result of the incident.
Responses might include taking a system off the network, isolating traffic,
powering off the system, or other items to control both the scope and severity of
the incident. This phase is also typically where a binary (bit-by-bit) forensic
backup is made of systems involved in the incident. An important trend to
understand is that most organizations will now capture volatile data before
pulling the power plug on a system.

Mitigation (eradication)

The mitigation phase, or eradication, involves the process of understanding the cause of the incident so that the system can be reliably cleaned and ultimately
restored to operational status later in the recovery phase. In order for an
organization to recover from an incident, the cause of the incident must be
determined. The cause must be known so that the systems in question can be
returned to a known good state without significant risk of the compromise
persisting or reoccurring. A common occurrence is for organizations to remove
the most obvious piece of malware affecting a system and think that is sufficient;
when in reality, the obvious malware may only be a symptom and the cause
may still be undiscovered.
Once the cause and symptoms are determined, the system needs to be restored
to a good state and should not be vulnerable to further impact. This will
typically involve either rebuilding the system from scratch or restoring from a
known good backup.

Reporting

The reporting phase of incident handling occurs throughout the process, beginning with detection. Reporting must begin immediately upon detection of malicious activity. Reporting contains two primary areas of focus: technical and nontechnical reporting. The incident handling teams must report the technical details of the incident as they begin the incident handling process, while maintaining sufficient bandwidth to also notify management of serious incidents. A common mistake is forgoing the latter while focusing on the technical details of the incident itself. Nontechnical stakeholders, including business and mission owners, must be notified immediately of any serious incident and kept up to date as the incident-handling process progresses.

Recovery

The recovery phase involves cautiously restoring the system or systems to operational status. Typically, the business unit responsible for the system will dictate when the system will go back online. Remember to be cognizant of the
possibility that the infection, attacker, or other threat agent might have persisted
through the eradication phase. For this reason, close monitoring of the system
after it returns to production is necessary. Further, to make the security
monitoring of this system easier, strong preference is given to the restoration of
operations occurring during off-peak production hours.

Remediation

Remediation steps occur during the mitigation phase, where vulnerabilities within the impacted system or systems are mitigated. Remediation continues
after that phase and becomes broader. For example, if the root-cause analysis
determines that a password was stolen and reused, local mitigation steps could
include changing the compromised password and placing the system back
online. Broader remediation steps could include requiring dual-factor
authentication for all systems accessing sensitive data. We will discuss root-
cause analysis shortly.

Lessons learned

The goal of this phase is to provide a final report on the incident, which will be
delivered to management. Important considerations for this phase should
include detailing ways in which the compromise could have been identified
sooner, how the response could have been quicker or more effective, which
organizational shortcomings might have contributed to the incident, and what
other elements might have room for improvement. Feedback from this phase
feeds directly into continued preparation, where the lessons learned are applied
to improving preparation for the handling of future incidents.

Tasks and responsibilities

The day-to-day tasks you can expect to perform as a security engineer will vary depending on your company, industry, and the size of your security team. Typical tasks include:
 Identifying security measures to improve incident response
 Responding to security incidents
 Coordinating incident response across teams
 Performing security assessments and code audits
 Developing technical solutions to security vulnerabilities
 Researching new attack vectors and developing threat models
 Automating security improvements

As information security grows in importance across industries, so does the need
for security engineers. This means you can find jobs in health care, finance, non-
profit, government, manufacturing, or retail, to name a few.

Security engineer vs. security analyst: What’s the difference?


Both security analysts and engineers are responsible for protecting their
organization’s computers, networks, and data. While there might be some overlap
in their tasks, these two jobs are distinct.
Security engineers build the systems used to protect computer systems and
networks and track incidents. Security analysts monitor the network to detect
and respond to security breaches. Many security engineers start out as security
analysts.

A risk assessment framework (RAF) is a strategy for prioritizing and sharing information about the security risks to an information technology (IT) infrastructure. A good RAF organizes and presents information in a way that both technical and non-technical personnel can understand.

Risk assessments are a lot like stargazing. You can wave your telescope at the
sky and hope you see something. Or, you can make a plan to focus on specific
areas of the sky where there’s a greater likelihood of spotting a comet flying
past.

Similarly, you can get a general idea of your corporate risk by evaluating risk events and breaches as they happen. Or, you can proactively use tools and technology to gain a more informed view of your existing, and yet-to-be-known, levels of risk.

Risk assessment frameworks empower your company to better assess present and future risks by offering data that accurately shows its overall risk level and tolerance. Your risk assessment framework touches many parts of your business,
from informing budgets and planning to helping you create a security-first
corporate culture. Understanding and creating a risk assessment framework is
more straightforward than it sounds, and it’s essential for your company’s
security.

What Is a Risk Assessment Framework?

A Risk Assessment Framework (RAF) is a strategy to outline, prioritize, and communicate risk-related information related to your greater business infrastructure. This sounds complex, but thankfully, a Risk Assessment Framework's goal is to simplify all of your security information so every team member understands it, whether they have a technical background or not.

RAF is often confused with a similar-sounding concept: the risk management
framework. This is understandable given the overlapping language and that both
processes deal with risk. However, the real difference is found in their scope.
Think of risk management as a chair with multiple supporting legs, one of
which is risk assessment. While both processes work together with analysis to
provide a thorough picture of existing risk, having a sound risk management
framework is impossible without an effective RAF. Just like you can’t sit on a
chair with only two legs, it’s vital to establish a robust risk assessment
framework as you strengthen your overall risk management process.

Primary Risk Assessment Types

While there are differences in risk assessment frameworks depending on what
area of risk you are solving for—corporate security, GRC, or information
security—most reputable risk assessment frameworks fall under one of the three
primary types: baseline, issue-based, or continuous. Each assessment type has a
unique primary purpose and accomplishes specific goals.

1. Baseline risk assessments


Baseline risk assessments collect benchmark information to identify and
prioritize existing risks. Here’s a corporate security example to illustrate what a
baseline risk assessment might look like. Imagine your brick-and-mortar
company lacks an alert on its employee entrance behind the building. A baseline
assessment would see that as an unchanging operational flaw and flag it for
improvement.

It also examines how this flaw might affect other baseline operations like sales,
inventory, and employee productivity. The 10,000-foot view these baseline
risk assessments provide touches almost every function, from people, HR, and
tools to processes, materials, environment, and finances.

2. Issue-based risk assessments
While baseline assessments cover problems with regular, consistent processes,
issue-based risk assessments take things a step further. They look at the risk
created as a domino effect from issues identified in the baseline assessment.

Let’s go back to our brick-and-mortar store with the security-free back door.
That back door was just used as an entry point for a successful burglary. Now,
it’s time to run an issue-based assessment and examine how situational changes
contributed to this incident. Have shift changes contributed to the door being
unlocked for longer than usual during business hours? Is the ordinarily
operational security camera by that door out of service?

Answering issue-based questions helps your team implement informed security
changes more confidently. It also signals that you're ready to consider adopting
the third risk assessment type: continuous risk assessments.

3. Continuous risk assessments


Unlike the first two assessment types, a continuous risk assessment happens on
an ongoing, 24/7 basis. This constant monitoring helps find new risks and better
inform any necessary baseline or issue-based risk assessments.

What does this look like in a practical scenario?

A continuous risk assessment should be run all the time—including before and
after an incident. Continuing the brick-and-mortar store example, a store
manager would likely already have information from a baseline assessment and
recognize their employee entrance is a potential vulnerability. They could then
take steps to fortify that entrance and reduce security risks, like adding a
password keypad or ID scanning or replacing the broken security camera to
deter thieves from entering. If an event does occur, they could use the
information provided by the issue-based assessment to fix the issues that caused
that problem in the first place. Either way, in a continuous assessment, the
potential vulnerability will continue to be monitored and assessed to see if fixes
prevent future incidents or if a new strategy or protocol needs to be
implemented.

The invaluable information from these assessments will mitigate potential risks
before they become events and offer risk event-related information to make
further mitigation efforts more effective.

Common Risk Assessment Frameworks

There are many different assessment frameworks available. The one you choose
will depend on your area of risk management and security, your industry, and
the type of risk you need to address. Many respected organizations offer
standardized RAFs for specialized industries to ease the assessment process.

Here are some common frameworks and the industries they serve:

 Factor Analysis of Information Risk (FAIR): FAIR offers best
practices to help executives across all business areas understand and
prevent cyber and operational risk. It explains a company’s stake in
dollars and cents so everyone can understand it, regardless of their role.
 Committee of Sponsoring Organizations of the Treadway
Commission (COSO): COSO’s framework is popular among accounting
agencies, finance firms, and publicly-traded companies, thanks to its
emphasis on internal controls and how they impact more extensive
processes.
 Control Objectives for Information and related Technology
(COBIT): COBIT was created by the Information Systems Audit and
Control Association, or ISACA, and is an ideal framework for businesses
that want to improve their IT practices.
 Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE): OCTAVE’s risk-based approach is designed for
cybersecurity companies largely independent from corporate oversight.
 Risk Management Guide for Information Technology Systems from
the National Institute of Standards and Technology (NIST): NIST’s
guide focuses on highly technical federal information systems and
organizations.
 Threat Agent Risk Assessment (TARA): TARA’s methodology is well-
suited for integrating into and operating with enterprise-level IT and
defense-related companies.

Steps to Build Your First Customized Risk Assessment Framework

Knowing what you should do to assess risk is pointless without knowing how to
do it. So, now that you know the assessment type and RAF your company could
most benefit from, it’s time to build and implement your customized framework
to start seeing results.

Here are five foundational steps your company should take to establish its first
customized risk assessment.

1. Pinpoint and evaluate existing risk


Identifying existing and potential risks, or business process mapping, helps your
company assess how it should deal with risk-related information when it
arises.

Outlined risk identification processes should include:

 Checklists that use your company's common risks to set the standard for
future efforts
 SME and stakeholder interviews to better establish and categorize risk
registers
 Data collection to prove the risks your company faces and to outline a
clear path for measurable improvement
 Scenario analysis to define business-related factors that contribute to
higher risk measurements

Once you’ve established your ideal risk identification process, it’s time to
engage your workforce. Ask knowledgeable or seasoned employees from
various departments what risks or control areas are a significant struggle for
them. Use these results to create an internal risk scale to prioritize your more
urgent security needs over less pressing ones.

It may seem obvious, but you won’t be able to address them all at once. Once
you have a list of your risks, you’ll need to prioritize them. Evaluate each one
based on how likely it is to happen and how catastrophic it would be if it
happened. Something that has a high impact should it happen, but a low chance
of taking place isn’t as important to fix as something with an increased
likelihood of occurring and a significant impact if it does. Plot each risk on a
risk matrix with the probability of the event happening along one axis and
consequence severity on the other.

From here, you can analyze the matrix—we like the bow-tie method—to
weigh the likelihood and consequences of potential risks.
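
To make step one concrete, the sketch below scores each risk as likelihood
times impact and sorts the list so the most urgent items surface first. It is a
minimal Python illustration; the risk names, the 1-5 scales, and the simple
multiplication rule are illustrative assumptions, not part of any particular
framework.

# A minimal risk-prioritization sketch in Python. The risk names and the
# 1-5 likelihood/impact scores below are illustrative assumptions.
risks = [
    {"name": "Unmonitored back entrance", "likelihood": 4, "impact": 4},
    {"name": "Out-of-date firewall rules", "likelihood": 3, "impact": 5},
    {"name": "Server room flood", "likelihood": 1, "impact": 5},
]

for risk in risks:
    # A risk's position on the matrix: probability along one axis,
    # consequence severity on the other.
    risk["score"] = risk["likelihood"] * risk["impact"]

# Address the highest-scoring risks first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(risk["name"], "->", risk["score"])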

2. Use your assessment results to define a risk management plan


Risk management strategies are what senior management relies on to manage
and mitigate operational risks, especially after a breach. And without this clear
risk response plan, you might not be able to see other current risks, exposing
your company to future violations. So, how do you determine what this plan
should look like?

Use the risk matrix created in step one to assess which risks pose the most
urgent threat and should be addressed by your risk management strategy first.
Then separate those risks into core functions and non-essential ones. Some risks
are inevitable. Software breaks. Machinery needs maintenance. Firewalls can
fail. Any business that wants to keep running and growing has to accept an
inherent amount of enterprise risk. You must accept core risks to develop and
maintain operations, though you should try to mitigate them as much as
possible.

Non-essential risks that don’t affect core operations could be eliminated entirely.
However, some risks (like a company accepting risk beyond its tolerance
because its last risk assessment is out-of-date) can be avoided by better
processes. Eliminating small, easy-to-address risks lets you focus on the big
ones without compromising overall security or worrying about other processes
breaking down.

3. Implement your risk management plan using internal security controls


Knowing your existing risks isn’t helpful without an action plan to actively
mitigate them. The easiest way to start the implementation process is to focus
on the internal controls that enforce your security standards. Implementation
can happen gradually to save the time and resources you would otherwise need
to execute an all-at-once rollout. For example, tweaking a new hire's job
description or
operation is far simpler than asking a long-time employee to alter their
workload or process.

4. Analyze your data and report the results


Data often tells a story, whether we like that story or not. Keep a record of
changes to new and existing internal controls during implementation to make it
easier to assess the controls as a part of your larger information system and risk
management framework. Tracking changes to risk processes also empowers you
to determine available data and set objective measurements to analyze results
against assessment goals set during the initial evaluation process. To simplify
data collection, analysis, and information sharing, we’re fans of centralizing
your data warehouse in a risk intelligence software solution, like Resolver
(naturally).

5. Review and adjust your assessment process


A rigid risk assessment process might help meet your company’s current needs
for a short period. However, like any business process, it needs adaptability to
meet new or unforeseen risks sustainably. Thankfully, when evaluated and
analyzed, your already-established internal controls and data collection
processes can provide insights to help risk teams make smart, informed
decisions and adjust your assessment as needed.

Build risk assessments for better risk management with Resolver


A well-developed risk assessment framework is incredibly helpful in detailing
and communicating risk-related information your technical and non-technical
employees can understand. This unity helps your team more thoroughly address
existing and potential risks to keep your company safe. However, you can’t rely
on the same assessment framework to be effective across the board.

NIST Risk Management Framework

The Risk Management Framework (RMF) from the National Institute of
Standards and Technology (NIST) provides a comprehensive, repeatable, and
measurable seven-step process organizations can use to manage information
security and privacy risk. It links to a suite of NIST standards and guidelines to
support the implementation of risk management programs to meet the
requirements of the Federal Information Security Modernization Act (FISMA).

RMF provides a process that integrates security, privacy, and supply chain risk
management activities into the system development lifecycle, according to NIST.
It can be applied to new and legacy systems, any type of system or technology
including internet of things (IoT) and control systems, and within any type of
organization regardless of size or sector. The seven RMF steps are: Prepare,
Categorize, Select, Implement, Assess, Authorize, and Monitor.

“NIST RMF can be tailored to organizational needs,” Raman says. It is
frequently assessed and updated, and many tools support the standards developed.
It’s vital that IT professionals “understand when deploying NIST RMF it is not
an automated tool, but a documented framework that requires strict discipline to
model risk properly.”

NIST has produced several risk-related publications that are easy to understand
and applicable to most organizations, says Mark Thomas, president of Escoute
Consulting and a speaker for the Information Systems Audit and Control
Association (ISACA). “These references provide a process that integrates
security, privacy, and cyber supply chain risk management activities that assists
in control selection and policy development,” he says. “Sometimes thought of as
guides for government entities, NIST frameworks are powerful reference for
government, private, and public enterprises.”

OCTAVE

The Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE), developed by the Computer Emergency Readiness Team (CERT) at
Carnegie Mellon University, is a framework for identifying and managing
information security risks. It defines a comprehensive evaluation method that
allows organizations to identify the information assets that are important to their
goals, the threats to those assets, and the vulnerabilities that might expose those
assets to the threats.

By putting together the information assets, threats, and vulnerabilities,
organizations can begin to understand what information is at risk. With this
understanding, they can design and deploy strategies to reduce the overall risk
exposure of information assets.

Two versions of OCTAVE are available. One is OCTAVE-S, a simplified
methodology designed for smaller organizations that have flat hierarchical
structures. The other is OCTAVE Allegro, which is a more comprehensive
framework suitable for large organizations or those that have complex structures.

“OCTAVE is a well-designed risk assessment framework because it looks at
security from a physical, technical, and human resource perspective,” Raman
says. “It identifies assets that are mission-critical for any organization and
uncovers threats and vulnerabilities. However, it can be very complex to deploy
and it solely quantifies from a qualitative methodology.”

The flexibility of the methodology “allows teams from operations and IT to work
together to address the security needs of the organization,” Thomas says.

COBIT

Control Objectives for Information and related Technology (COBIT), from
ISACA, is a framework for IT management and governance. It is designed to be
business focused and defines a set of generic processes for the management of IT.
Each process is defined together with process inputs and outputs, key activities,
objectives, performance measures and an elementary maturity model.

The latest version, COBIT 2019, offers more implementation resources, practical
guidance and insights, as well as comprehensive training opportunities, according
to ISACA. It says implementation is now more flexible, enabling organizations to
customize their governance via the framework.

COBIT is a “high-level framework aligned to IT management processes and
policy execution,” says Ed Cabrera, chief cybersecurity officer at security
software provider Trend Micro and former CISO of the United States Secret
Service. “The challenge is that COBIT is costly and requires high knowledge and
skill to implement.”

The framework “is the only model that addresses the governance and
management of enterprise information and technology, which includes an
emphasis [on] security and risk,” Thomas says. “Although the primary intent of
COBIT is not specifically in risk, it integrates multiple risk practices throughout
the framework and refers to multiple globally accepted risk frameworks.”

TARA

Threat Assessment and Remediation Analysis (TARA) is an engineering
methodology used to identify and assess cybersecurity vulnerabilities and deploy
countermeasures to mitigate them, according to MITRE, a not-for-profit
organization that works on research and development in technology domains
including cybersecurity.

The framework is part of MITRE's portfolio of systems security engineering
(SSE) practices. “The TARA assessment approach can be described as conjoined
trade studies, where the first trade identifies and ranks attack vectors based on
assessed risk, and the second identifies and selects countermeasures based on
assessed utility and cost,” the organization claims.

Unique aspects of the methodology include use of catalog-stored mitigation
mappings that preselect possible countermeasures for a given range of attack
vectors, and the use of countermeasure strategies based on the level of risk
tolerance.

“This is a practical method to determine critical exposures while considering
mitigations, and can augment formal risk methodologies” to include important
information about attackers that can result in an improved risk profile, Thomas
says.

FAIR
Factor Analysis of Information Risk (FAIR) is a taxonomy of the factors that
contribute to risk and how they affect each other. Developed by Jack Jones,
former CISO of Nationwide Mutual Insurance, the framework is mainly
concerned with establishing accurate probabilities for the frequency and
magnitude of data loss events.

FAIR is not a methodology for performing an enterprise or individual risk
assessment. But it provides a way for organizations to understand, analyze, and
measure information risk. The framework’s components include a taxonomy for
information risk, standardized nomenclature for information-risk terms, a method
for establishing data-collection criteria, measurement scales for risk factors, a
computational engine for calculating risk, and a model for analyzing complex
risk scenarios.

FAIR “is one of the only methodologies that provides a solid quantitative model
for information security and operational risk,” Thomas says. “This pragmatic
approach to risks provides a solid foundation to assessing risks in any enterprise.”
However, while FAIR provides a comprehensive definition of threat,
vulnerability, and risk, “it’s not well documented, making it difficult to
implement,” he says.

The model differs from other risk frameworks “in that the focus is on quantifying
risks into actual dollars, as opposed to the traditional ‘high, medium, low’ scoring
of others,” Retrum says. “This is gaining traction with senior leaders and board
members, enabling a more thoughtful business discussion by better quantifying
risks in a meaningful way.”

Security engineering is the process of incorporating security controls into an
information system so that the controls become an integral part of the system's
operational capabilities.

Security Engineering Steps

Security engineering must start early in the application deployment process. In
fact, each step in the application deployment should be started early - security
planning, securing the system, developing the system with security, and testing
the system with security.

 Understand your system architecture
As early as possible, possibly as early as your solution definition phase,
obtain and understand your system architecture.
 Build out your system
Given a solid understanding of your system architecture and the security
requirements, you should build out your systems with all the required
security features enabled as early in the project as possible. For example,
your operational infrastructure, which includes your operating systems,
databases, network and applications servers, should be hardened to
industry best practices or recommendations.
 Build end-to-end test cases
In parallel, you should also identify end-to-end test cases for the system.
The goal of these test cases is to test data entering and flowing through
your system. These test cases should include every integrated application.
 Harden the infrastructure and the application software
After the successful completion of your end-to-end tests, you should next
harden your infrastructure starting with the operating systems, database,
and application servers. When that is completed, we recommend that you
harden your network.
 Design your deployment strategy
At the successful completion of this end-to-end test on a hardened
infrastructure on a fully integrated system, you are now ready to design
and then deploy your system into secure network architecture.

Encryption Algorithms
Encryption algorithms are mathematical methods of transforming data into an
unreadable form, using a secret key. These algorithms can be divided into two
categories: symmetric encryption, which uses the same key for both encryption
and decryption (e.g. AES, DES, or RC4), and asymmetric encryption, which
uses a pair of keys (e.g. RSA, ECC, or DH). The implementation of these
algorithms can be done either in hardware or software, depending on the desired
balance between performance, power consumption, and flexibility.
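
As a concrete illustration of symmetric encryption, the Python sketch below
encrypts and decrypts a message with AES, using the third-party
"cryptography" package; the library choice and the message are assumptions
for illustration, and any comparable library would do.

# Symmetric encryption sketch: AES in GCM mode via the third-party
# "cryptography" package (pip install cryptography). The same secret key
# encrypts and decrypts; the message is an illustrative placeholder.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared secret key
nonce = os.urandom(12)                      # must never repeat for the same key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"transfer approved", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"transfer approved"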

The Advanced Encryption Standard (AES) is a symmetric block cipher chosen
by the U.S. government to protect classified information. AES is implemented
in software and hardware throughout the world to encrypt sensitive data. It is
essential for government computer security, cybersecurity and electronic data
protection.
Data Encryption Standard (DES) is an outdated symmetric key method of data
encryption. It was adopted in 1977 for government agencies to protect sensitive
data and was officially retired in 2005. IBM researchers originally designed the
standard in the early 1970s.
RC4 is a symmetric stream cipher that was once widely deployed, for example
in WEP and in early versions of TLS. Because of practical attacks against it,
RC4 is now considered insecure and is deprecated for new designs.
RSA encryption, in full Rivest-Shamir-Adleman encryption, is a type of
public-key cryptography widely used for data encryption of e-mail and other
digital transactions over the Internet. RSA is named for its inventors, Ronald
L. Rivest, Adi Shamir, and Leonard M. Adleman.
RSA is a type of asymmetric encryption, which uses two different but linked
keys. In RSA cryptography, both the public and the private keys can encrypt a
message. The opposite key from the one used to encrypt a message is used to
decrypt it.

39
Elliptic-curve cryptography (ECC) is an approach to public-key cryptography
based on the algebraic structure of elliptic curves over finite fields.
Diffie–Hellman (DH) key exchange allows two parties to establish a shared
secret key over an insecure channel. This key can then be used to encrypt
subsequent communications using a symmetric-key cipher. Diffie–Hellman is
used to secure a variety of Internet services.

Encryption Modes
Encryption modes are ways of applying encryption algorithms to data,
depending on the size and structure of the data. These modes can be divided
into two categories: block encryption and stream encryption. Block encryption
uses a block cipher, such as AES or DES, to encrypt data in fixed-size chunks,
such as 64-bit or 128-bit blocks. Stream encryption uses a stream cipher, such as
RC4 or ChaCha20, to encrypt data in a continuous stream. The security and
efficiency of the encryption process is affected by how the encryption mode
handles the initialization vector, padding, and chaining of the data blocks or
streams.
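
The sketch below illustrates how a block mode handles the initialization
vector and padding, again using Python's "cryptography" package: AES in
CBC mode with PKCS7 padding and a fresh random IV per message. The key,
IV handling, and message are illustrative assumptions.

# Block-mode sketch: AES-CBC with PKCS7 padding and a fresh random IV for
# every message (same "cryptography" package as above; values illustrative).
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # 256-bit AES key
iv = os.urandom(16)     # new IV per message; reusing one weakens CBC badly

# Pad the plaintext up to the cipher's 128-bit block size.
padder = padding.PKCS7(128).padder()
padded = padder.update(b"short message") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()
# The IV is not secret; it is normally transmitted alongside the ciphertext.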

Key Management
Key management is the process of generating, storing, distributing, and
updating the keys used for encryption and decryption. It can be difficult to
manage keys in integrated circuit design due to the limited resources and
capabilities of the hardware devices. Key management includes key generation,
which is the process of creating random and secure keys using sources of
entropy such as PUFs or noise generators. Key storage involves storing them in
a secure and accessible way using memory devices like ROM, EEPROM, or
flash memory. Key distribution is the process of transferring the keys to
intended recipients through communication protocols such as SSL/TLS, SSH,
or NFC. Lastly, key update is changing the keys periodically or after an event
like a breach or compromise.
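
A minimal Python sketch of the generation and update pieces, using the
standard library's cryptographically secure random source; the storage
structure and rotation policy shown are illustrative assumptions, not a
recommended design.

# Key generation and rotation sketch using only the standard library.
import secrets

def generate_key(num_bytes: int = 32) -> bytes:
    """Create a random key drawn from a secure entropy source."""
    return secrets.token_bytes(num_bytes)

keys = {"current": generate_key(), "previous": None}

def rotate_key() -> None:
    """Swap in a new key; keep the old one only to decrypt older data."""
    keys["previous"] = keys["current"]
    keys["current"] = generate_key()

rotate_key()  # e.g., run periodically or after a suspected compromise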

Security Protocols
Security protocols are sets of rules and procedures that govern the
communication and interaction between different entities in a system, such as
devices, servers, or users. These protocols can provide several functions, such
as authentication by verifying the identity of the entities involved with
passwords, certificates, or biometrics; authorization by granting or denying
access to certain resources or operations based on roles and privileges;
confidentiality by ensuring data exchanged is encrypted and protected from

40
eavesdropping; and integrity by making sure data is not modified or corrupted
during transmission or storage. To achieve these functions, security protocols
may use methods such as access control lists, tokens, policies, encryption
algorithms, encryption modes, key management, checksums, hashes, or digital
signatures.
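
As one concrete integrity mechanism from the list above, the sketch below
computes and verifies an HMAC-SHA256 tag with Python's standard library,
assuming both parties already share a secret key; the message is a placeholder.

# Integrity sketch: an HMAC-SHA256 tag detects modification in transit.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)
message = b"order 1001: ship 5 units"

tag = hmac.new(key, message, hashlib.sha256).digest()   # computed by sender

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)  # fails if message or tag changed
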
Security Challenges and Threats
Security challenges and threats are the potential risks and vulnerabilities that
can compromise the security and functionality of integrated circuits and their
data. These can be divided into passive attacks, which involve attempts to
observe or analyze the data or hardware without affecting them (e.g. snooping,
sniffing, or side-channel analysis), and active attacks, which attempt to alter or
manipulate the data or hardware (e.g. injection, modification, deletion, or
physical damage). Countermeasures exist to mitigate these threats, such as
encryption, obfuscation, anti-tamper mechanisms, or security protocols.

Security Protocols
A sequence of operations that ensure protection of data. Used with a
communications protocol, it provides secure delivery of data between two
parties. The term generally refers to a suite of components that work in tandem
(see below). For example, the 802.11i standard provides these functions for
wireless LANs.

TLS, which superseded SSL, is widely used to provide encryption between a
user's browser and a website. Following are the primary functions that a security
protocol may support.

Access Control

Authenticates user identity. Authorizes access to specific resources based on
permissions level and policies.

Encryption Algorithm

The cryptographic cipher combined with various methods for encrypting the
text.

Cryptography

The encrypting (scrambling) of data into a secret code. Cryptography is used to
conceal messages transmitted over public networks such as the Internet. It is
used to encrypt storage drives and messages so that only authorized users have
access.

Cryptography is a major driver behind Bitcoin and blockchains, which hide the
coin owner's identity in an encrypted address.

From Plaintext to Ciphertext

A text message in its original form is called "plaintext." Using an encryption
algorithm, the plaintext is turned into "ciphertext," which is indecipherable.

Keys Are the Key

The encryption algorithm uses a "key," which is a binary number that is
typically from 40 to 256 bits in length. The greater the number of bits in the key
(cipher strength), the more possible key combinations and the longer it would
take to break the code. The data are encrypted, or "locked," by combining the
bits in the key mathematically with the data bits. At the receiving end, the key is
used to "unlock" the code and restore the original data.
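
The relationship between key length and cipher strength can be seen with a
little arithmetic: each added bit doubles the number of possible keys, as the
short Python sketch below shows.

# Each added key bit doubles the keyspace:
for bits in (40, 128, 256):
    print(bits, "bits ->", 2 ** bits, "possible keys")
# A 40-bit key has about 1.1 trillion possibilities and falls quickly to
# brute force; 128- and 256-bit keyspaces are beyond exhaustive search.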

BitLocker

A utility in Windows, starting with Vista, that encrypts the entire contents of the
storage drive (hard disk or SSD). If the computer's motherboard has a Trusted
Platform Module (TPM) chip, the operation is entirely transparent to the user.

Non-TPM Operation

If the motherboard does not have a TPM chip, BitLocker can be used in two
ways. In User Authentication Mode, a PIN or password must be entered when
the computer is turned on. In USB Key Mode, either a USB drive or a smart
card with a USB interface is inserted at startup.

BitLocker vs. Encrypting File System

Two encryption systems come with Windows. BitLocker encrypts the entire
storage drive, whereas Encrypting File System (EFS) is used to encrypt specific
files.

ScramDisk

A Windows program that created encrypted volumes on the hard disk.
ScramDisk supported several cryptographic algorithms and allowed data to be
concealed in an existing WAV audio file.

ScramDisk for Linux and DriveCrypt

No longer supported, ScramDisk for Linux (SD4L) was also created. An
enhanced, commercial version called DriveCrypt is available from SecurStar
GmbH.

Waggle Mouse to Select Cipher

This dialog was used to select the encryption algorithm. Waggle means
"shake," and the more the mouse was shaken, the more randomness was
introduced into the key creation.
Digital signature

A digital signature authenticates the sender of a message and provides the
electronic equivalent of a tamper-proof seal that is broken if any data in the
message were altered. Digital signatures use the public key encryption system
for the following purposes.

Signed Certificates

Signed certificates authenticate a website and establish an encrypted connection
for credit cards and confidential data (see digital certificate and TLS).

Signed Executables

Code signing verifies the integrity of executables downloaded from the Internet
(see code signing).

Signed Cryptocurrency Transactions

Bitcoin and other blockchain networks use digital signatures to verify the
integrity of their transactions.

Signatures Are Encrypted Digests

The digest is a digital fingerprint of the data
that is encrypted ("signed") with the private key of the sender's public/private
key pair. To prove the file was not altered, the recipient decrypts the signature
with the sender's public key, recomputes a new digest from the data and
compares them. If they match, nothing was altered (see below).
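
The digest-then-sign flow described above can be sketched in Python with the
"cryptography" package; RSA with PSS padding and SHA-256 is an
illustrative choice, not the only possible one, and the document bytes are a
placeholder.

# Digest-then-sign sketch. verify() raises InvalidSignature if the
# document was altered after signing.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
document = b"contract revision 3"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(document, pss, hashes.SHA256())  # sign the digest
public_key.verify(signature, document, pss, hashes.SHA256())  # intact if no error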

Transmitted in the Clear

In this example, the message is tamperproof but can be read by an eavesdropper.

Transmitted in Secret

In this example, the message is both tamperproof and transmitted in secret.

Crypto Addresses

The identification of a sender or receiver of cryptocurrency on a blockchain
network. Crypto addresses use the public key cryptographic method, which
comprises a private-public key pair. The public key is derived from the private
key, both of which are a binary number that is presented as a series of
alphanumeric characters.

The private key is used to withdraw digital coins and must be backed up and
kept secret. The public key is used to receive coins and can be freely shared in a
manner similar to a bank account number for a wire transfer.
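
A toy Python sketch of the idea that an address is derived from the public key
by hashing. The key bytes below are placeholders, and real Bitcoin addresses
also apply RIPEMD-160 hashing and Base58Check encoding on top of
SHA-256, which this simplified version omits.

# Toy address derivation: hash the public key, keep a short identifier.
import hashlib

public_key_bytes = bytes.fromhex("04" + "ab" * 64)  # placeholder public key
address = hashlib.sha256(public_key_bytes).hexdigest()[:40]
print(address)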

Not Entirely Anonymous

Every transaction that a person makes on the blockchain, no matter how long
ago, can be viewed by anyone via the public address. However, with sufficient
effort by a hacker or government agency, a public address can eventually be
matched with the name of a person or entity.

It Can Be More User Friendly

There are several systems that translate human-readable addresses to public
crypto addresses.

Bitcoin Public Address: A public address can be freely published to receive
bitcoins. For quick scanning, QR codes of public addresses are commonly
available in wallets and on exchanges. Generating a Bitcoin public address
involves several hashing and encoding steps.

Steganographic

Hiding a message within an image, audio or video file. Used as an alternate to
encryption, it takes advantage of unused bits within the file structure or bits that
are mostly undetectable if altered. A steganographic message rides secretly to its
destination, unlike encrypted messages, which although undecipherable without
the decryption key, can be identified as encrypted.

Social Steganography

Hiding messages that are published online. The topic was popularized in an
article by Microsoft researcher Danah Boyd in 2010. She cited an example of a
young girl posting lyrics to Monty Python's "Always Look on the Bright Side
of Life" to keep her mother from knowing she broke up with her boyfriend and
getting overly involved. In the Monty Python movie, people were about to be
killed when the song played, and the girl's friends, hip to the movie, contacted
her independently.

What is access control?


Access control is a security technique that regulates who or what can view or
use resources in a computing environment. It is a fundamental concept in
security that minimizes risk to the business or organization.

There are two types of access control: physical and logical. Physical access
control limits access to campuses, buildings, rooms and physical IT assets.
Logical access control limits connections to computer networks, system files
and data.

To secure a facility, organizations use electronic access control systems that rely
on user credentials, access card readers, auditing and reports to track employee
access to restricted business locations and proprietary areas, such as data centers.
Some of these systems incorporate access control panels to restrict entry to
rooms and buildings, as well as alarms and lockdown capabilities, to prevent
unauthorized access or operations.

Logical access control systems perform identification, authentication and
authorization of users and entities by
evaluating required login credentials that can include passwords, personal
identification numbers, biometric scans, security tokens or other authentication
factors. Multifactor authentication (MFA), which requires two or more
authentication factors, is often an important part of a layered defense to protect
access control systems.

Why is access control important?


The goal of access control is to minimize the security risk of unauthorized
access to physical and logical systems. Access control is a fundamental
component of security compliance programs that ensures security technology
and access control policies are in place to protect confidential information, such
as customer data. Most organizations have infrastructure and procedures that
limit access to networks, computer systems, applications, files and sensitive
data, such as personally identifiable information and intellectual property.

Access control systems are complex and can be challenging to manage in
dynamic IT environments that involve on-premises systems and cloud services.
After high-profile breaches, technology vendors have shifted away from single
sign-on systems to unified access management, which offers access controls for
on-premises and cloud environments.

How access control works


Access controls identify an individual or entity, verify the person or application
is who or what it claims to be, and authorize the access level and set of actions
associated with the username or IP address. Directory services and protocols,
including Lightweight Directory Access Protocol and Security Assertion
Markup Language, provide access controls for authenticating and authorizing
users and entities and enabling them to connect to computer resources, such as
distributed applications and web servers.

Organizations use different access control models depending on their
compliance requirements and the security levels of IT they are trying to protect.

Types of access control
The main models of access control are the following:

 Mandatory access control (MAC). This is a security model in which
access rights are regulated by a central authority based on multiple
levels of security. Often used in government and military
environments, classifications are assigned to system resources and the
operating system or security kernel. MAC grants or denies access to
resource objects based on the information security clearance of the
user or device. For example, Security-Enhanced Linux is an
implementation of MAC on Linux.
 Discretionary access control (DAC). This is an access control
method in which owners or administrators of the protected system,
data or resource set the policies defining who or what is authorized to
access the resource. Many of these systems enable administrators to
limit the propagation of access rights. A common criticism of DAC
systems is a lack of centralized control.
 Role-based access control (RBAC). This is a widely used access
control mechanism that restricts access to computer resources based
on individuals or groups with defined business functions -- e.g.,
executive level, engineer level 1, etc. -- rather than the identities of
individual users. The role-based security model relies on a complex
structure of role assignments, role authorizations and role permissions
developed using role engineering to regulate employee access to
systems. RBAC systems can be used to enforce MAC and DAC
frameworks (a minimal RBAC permission check is sketched after this list).
 Rule-based access control. This is a security model in which the
system administrator defines the rules that govern access to resource
objects. These rules are often based on conditions, such as time of day
or location. It is not uncommon to use some form of both rule-based
access control and RBAC to enforce access policies and procedures.

 Attribute-based access control. This is a methodology that manages
access rights by evaluating a set of rules, policies and relationships
using the attributes of users, systems and environmental conditions.
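
As referenced in the RBAC item above, here is a minimal Python sketch of a
role-based permission check; the roles, users, and permissions are illustrative
assumptions.

# Minimal role-based access control check.
ROLE_PERMISSIONS = {
    "engineer_level_1": {"read_code", "commit_code"},
    "executive": {"read_reports", "approve_budget"},
}
USER_ROLES = {"amina": {"engineer_level_1"}}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("amina", "commit_code")
assert not is_allowed("amina", "approve_budget")
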
Implementing access control
Access control is integrated into an organization's IT environment. It can
involve identity management and access management systems. These systems
provide access control software, a user database and management tools for
access control policies, auditing and enforcement.

When a user is added to an access management system, system administrators
use an automated provisioning system to set up permissions based on access
control frameworks, job responsibilities and workflows.

The best practice of least privilege restricts access to only resources that
employees require to perform their immediate job functions.

Challenges of access control


Many of the challenges of access control stem from the highly distributed nature
of modern IT. It is difficult to keep track of constantly evolving assets because
they are spread out both physically and logically. Specific examples of
challenges include the following:

 dynamically managing distributed IT environments;
 password fatigue;
 compliance visibility through consistent reporting;
 centralizing user directories and avoiding application-specific silos;
and
 data governance and visibility through consistent reporting.

Many traditional access control strategies -- which worked well in static
environments where a company's computing assets were held on premises -- are
ineffective in today's dispersed IT environments. Modern IT environments
consist of multiple cloud-based and hybrid implementations, which spreads

assets out over physical locations and over a variety of unique devices, and
require dynamic access control strategies.

Organizations often struggle to understand the difference between
authentication and authorization. Authentication is the process of verifying that
individuals are who they say they are, using methods such as biometric
identification and MFA.
The distributed nature of assets gives organizations many avenues for
authenticating an individual.

Authorization is the act of giving individuals the correct data access based on
their authenticated identity. One example of where authorization often falls
short is if an individual leaves a job but still has access to that company's assets.
This creates security holes because the asset the individual used for work -- a
smartphone with company software on it, for example -- is still connected to the
company's internal infrastructure but is no longer monitored because the
individual is no longer with the company. Left unchecked, this can cause major
security problems for an organization. If the ex-employee's device were to be
hacked, for example, the attacker could gain access to sensitive company data,
change passwords or sell the employee's credentials or the company's data.

One solution to this problem is strict monitoring and reporting on who has
access to protected resources so, when a change occurs, it can be immediately
identified and access control lists and permissions can be updated to reflect the
change.

Another often overlooked challenge of access control is user experience. If an
access management technology is difficult to use, employees may use it
incorrectly or circumvent it entirely, creating security holes and compliance
gaps. If a reporting or monitoring application is difficult to use, the reporting
may be compromised due to an employee mistake, which would result in a
security gap because an important permissions change or security vulnerability
went unreported.

Access Control Software
Many types of access control software and technology exist, and multiple
components are often used together as part of a larger identity and access
management (IAM) strategy. Software tools may be deployed on premises, in
the cloud or both. They may focus primarily on a company's internal access
management or outwardly on access management for customers. Types of
access management software tools include the following:

 reporting and monitoring applications
 password management tools
 provisioning tools
 identity repositories
 security policy enforcement tools

Microsoft Active Directory is one example of software that includes most of the
tools listed above in a single offering. Other IAM vendors with popular
products include IBM, Idaptive and Okta.

Physical Aspects: Biometrics


Biometrics is a system for recognizing people based on one or several
physiological or behavioural traits. This kind of data is used to identify the user
and assign an access ID. Biometric access control systems are convenient for
users because information carriers are always with them and cannot be lost or
fabricated. They are considered to be reliable systems because even in case of a
data breach, biometric credentials like fingerprints or a face scan are almost
impossible to use for unsanctioned access, unlike username and password
combinations.

In general, biometric identification systems are divided according to the
operation principle into two main types: physical and behavioural.

What are the main advantages and disadvantages of biometrics? Which
biometric systems are the most accurate? What is the difference between
behavioural systems and static ones? Let's try to understand the principles of
work and areas of application of biometrics.

What Are Biometrics?

The field of biometrics encompasses knowledge that represents methods for
measuring a person's personal physical and behavioural characteristics and how
to use them for identification or authentication purposes.

Biometrics use science-based means to describe and measure the characteristics
of the body of living beings. As applied to automatic identification systems, the
“biometric” term means those systems and methods are based on the use of any
human body's unique features for identification or authentication. Our life is
filled with situations where we need to prove who we are.

It is easy to list a wide range of industries that require fast, reliable, and
convenient user authentication: access to a personal computer or smartphone,
access to email, banking transactions, opening doors and starting your car's
engine, controlling access to premises, crossing state borders, and any
interaction with government authorities that requires identification. Thus, faster
and more secure authentication mechanisms are essential for preventing fraud
and crime.

Biometric identification is often called pure or real authentication since it relies
on a personal feature rather than a virtual key or password.
A specific feature of biometric identification is the large size of the
biometric database: each biometric sample must be compared with all
available records in the database. For use in real life, such a system requires a
high speed of matching biometric features.

Verification systems are on the other end of the spectrum; as a rule, they
make only one comparison, in 1:1 mode. That is, the presented biometric
feature is compared with one biometric record from the database. Therefore, the
system answers the question of whether you are who you claim to be.
In biometrics, two authentication methods are used (both are sketched in code after this list):

 Verification: measured parameters are compared with one record,
suggested by some external identifier (a username, password, or other sort
of ID) from the registered users' database.
 Identification: measured parameters are compared with all records from
the database of registered users, not with one of them selected based on
some identifier.
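
Both modes are sketched below in Python, modeling templates as small
feature vectors where a match means a small distance; the vectors, names, and
threshold are illustrative assumptions.

# Verification (1:1) vs. identification (1:N) sketch.
import math

enrolled = {
    "alice": [0.11, 0.52, 0.33],
    "bob": [0.92, 0.14, 0.47],
}
THRESHOLD = 0.2

def verify(claimed_id, probe):
    """1:1 -- compare the probe with the one claimed record only."""
    return math.dist(enrolled[claimed_id], probe) < THRESHOLD

def identify(probe):
    """1:N -- search the whole database for the closest enrolled match."""
    best = min(enrolled, key=lambda uid: math.dist(enrolled[uid], probe))
    return best if math.dist(enrolled[best], probe) < THRESHOLD else None

probe = [0.12, 0.50, 0.34]
assert verify("alice", probe)
assert identify(probe) == "alice"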

Two Main Types of Biometrics

Biometric recognition is the individual's presentation of his unique biometric
parameter and the process of comparing it with the entire database of available
data. Biometric readers are used to retrieve this kind of personal data.

Physical identification methods are based on the analysis of the invariable
physiological characteristics of a person.
These characteristics include:

 Face shape and geometry (technologies for recognizing two-
dimensional images of faces drawn from photographs and video
sequences work with these identifiers). Thanks to the growth of
multimedia technologies, more and more video cameras are installed on
city streets and squares, at airports, train stations, and other crowded
places, which drives this direction's development.
 Fingerprints (the most widespread, convenient, and effective biometric
technology is built on the use of these identifiers). The advantages of
fingerprint access are ease of use, convenience, and reliability. Although
the false identification rate is about 3%, the unauthorized access
probability is less than 0.00001% (about 1 in 10,000,000).
 The iris of the eye (patent restrictions
constrain the spread of the technology in which this identifier is used).
The advantage of iris scanners is that they do not require the user to focus
on the target because the iris pattern is on the eye's surface. The eye's
video image can be scanned at a distance of less than 1m.
 Palm, hand, or finger geometry (used in several narrow market
segments)
 Facial thermography, hand thermography (technologies based on the
use of these identifiers have not become widespread)
 Drawing of veins on the palm or finger (the corresponding technology
is becoming popular, but due to the high cost of scanners, it is not yet
widely used)
 DNA (mainly in the field of specialized expertise)

Behavioural identification methods are based on the analysis of a person's
behavioural characteristics — the characteristics inherent in each person in the
process of reproducing an action.
Behavioural methods of user identification are divided by:

 Signature recognition. For identification, either the simple degree of
coincidence of two signature images is used, or the signature image
together with the dynamic characteristics of writing (a template is built
that combines the signature's shape, the timing of its strokes, and
statistical characteristics of the dynamics of pen pressure on the surface).
 Keystroke dynamics. The method is generally similar to that described
above, but instead of a signature, a certain codeword is used (when the
user's password is used for this, such authentication is called two-factor).
The rhythm with which the codeword is typed is the main characteristic
used for identification; a small sketch appears after this list.
 Speaker recognition. It is one of the oldest biometric technologies. Its
development has intensified, and a great future and widespread use in
constructing «intelligent buildings» are predicted for it. There are many
ways to construct a voice identification code; as a rule, these are various
combinations of the voice's frequency and statistical characteristics.
 Gait recognition. This should be categorized as exotic. This direction
seems to be a dead end due to the poor repeatability of the feature and its
weak discriminating power.
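
As referenced in the keystroke-dynamics item above, a minimal Python
sketch: the timing gaps between keystrokes of a typed codeword are compared
with a stored profile. The profile values and tolerance are illustrative
assumptions.

# Keystroke-dynamics sketch: compare a typing rhythm with a profile.
stored_profile = [0.21, 0.35, 0.18, 0.27]  # mean gaps (seconds) between keys
TOLERANCE = 0.08                           # allowed mean absolute deviation

def matches_profile(observed_gaps) -> bool:
    """Accept if the observed rhythm deviates little from the profile."""
    deviations = [abs(o - s) for o, s in zip(observed_gaps, stored_profile)]
    return sum(deviations) / len(deviations) < TOLERANCE

assert matches_profile([0.23, 0.33, 0.20, 0.25])      # same typist, roughly
assert not matches_profile([0.50, 0.10, 0.45, 0.60])  # different rhythm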

Physical Biometrics

The main goal of biometrics is to create a registration system that would very
rarely deny access to legitimate users and, at the same time, completely exclude
unauthorized intruders. Compared to passwords and cards, such a system
provides much more reliable protection: after all, your own body can neither be
stolen nor lost.

Physical biometrics analyze data such as facial features, eye structure (retina or
iris), finger parameters (papillary lines, relief, length of joints, etc.), palm (print
or topography), hand shape, vein pattern on the wrist, and heat pattern.

Physical biometrics have become widespread; for example, access control to
smartphones and laptops. At some major airports, the iris is scanned to ensure
security. The data is stored in an international database, and the next time you
go through the control, you do not have to queue with your passport. Just go
through the eye scan procedure.

It is especially important for banks that it is impossible to repudiate actions
confirmed by the presentation of biometric identifiers. Unlike cards, fingerprint
recognition has no restrictions on the number of «readings» in biometric
systems. It is possible to register reserve identifiers, and here fingerprints are
again in the lead: an ordinary person has ten fingers but only one face, two
eyes, and two hands.

Healthcare facilities are switching to biometric technologies to keep their
patients safe, speed up care, and reduce errors. Biometric technologies are
actively integrated into electronic medical record systems in order to protect
patients' data.

It is important to note that all biometric means of authentication in one form or
another use the statistical properties of some of the qualities of an individual.
This means that their application results are probabilistic and will change from
time to time. Besides, all such tools are not immune to authentication errors.

Pros & Cons of Physical Biometrics

Pros:

 An identifier is inseparable from a person; it cannot be forgotten, lost, or
passed on. Having checked the identifier, we can say with a high degree
of certainty that this particular person was identified.
 It is quite difficult to recreate (fake) an identifier.
 The process of biometric identification is fast and completely performed
by computers.
 Identification can be carried out transparently (invisibly) for a person.

Cons:

 The need for certain environmental conditions for biometric identification.
 Situations can arise where biometric identifiers are damaged or
unavailable for reading.
 For many biometric identification systems, biometric scanners are quite
expensive.
 It is necessary to comply with the requirements of regulators for the
protection of personal biometric data.

Behavioural Biometrics

Behavioural biometrics involves the collection of a wide variety of data. For
example, a smartphone that collects information about behaviour can obtain
multiple measurement points to assess the likelihood of fraudulent activity,
while static biometrics provide less raw data. The combination of behavioural
characteristics in various mathematical algorithms makes it possible to obtain a
more multifaceted user profile that allows you to weed out fraudsters.

Behavioural biometrics are also called passive because users do not need to take
any additional steps when operating. They don't need to put their finger on a
dedicated button or speak into a microphone. They just behave as usual.
Behavioural biometrics can also detect fraud early, even before the act itself
(for example, stealing from a store or making a fraudulent purchase).

Behavioural biometrics can be adapted for various devices, including entire
smartphone operating systems, not just specific applications that use the
technology. This means that you can protect your entire phone. Each person has
unique features of interaction with their digital devices: the speed with which
they type on the keyboard, the force of pressing the keys, or the angle at which
they move their fingers across the screen. These behaviours are nearly
impossible to replicate by another person.

Today, the industrial application of behavioural biometrics is not yet
widespread. Experts suggest using the technology in cases where additional
authentication levels are needed — for example, when conducting large
transactions or access to highly sensitive data.

Now, behavioural biometrics are most often used by banks and financial
institutions. Experts also see the potential for technology applications in e-
commerce, online services, healthcare, government, and consumer electronics.

Pros & Cons of Behavioural Biometrics

Pros:

 Individual user set of analyzed behavioural characteristics.
 No change to the user's usual scenario is required to perform
identification: a seamless integration method.
 Improves recognition accuracy in multifactor identification systems.

Cons:

 Inaccuracies in identification may arise because the user's behaviour is
not always constant, since they can behave differently in various situations
due to fatigue, drunkenness, feeling unwell, or trivial haste.
 Behavioural biometrics are not yet widely adopted.
 Requires lots of personal data to determine a user's standard behaviour.

Difference Between Using Physical and Behavioural Biometrics

The use of a person’s physiological characteristics as a means of identification
has become widespread. However, after certain incidents, physiological
biometrics fell into the background — behavioural biometrics offered a more
reliable and safer alternative.

Physiological biometrics replaced passwords with personal identifiers —
fingerprints, facial features, iris, ear shape, or palm vein patterns. However, it
turned out that such identification systems are not always easy to use and not as
secure as expected. For example, scanning the iris of an eye is ineffective in
sunlight. And unlocking your phone with your fingerprint while exercising at
the gym or working in the garden won't be so easy.
Consumers also want more control over their personal information. The
problem with physiological biometrics is that it requires users to hand over their
private information and trust that it will be safely stored and not shared with
third parties.

Given the above problems, it is worth considering a multi-layered approach to
identification using both physiological and behavioural biometrics. This will
create a secure and user-friendly authentication method. Advances in artificial
intelligence and machine learning have opened up a new identification method
that analyzes how users interact with a device they use as an authentication tool.

User behaviour can be applied to identify someone, and it does require storing
large amounts of data. Stored data is used to develop a median behaviour for a
person, so it will increase identification accuracy when the user is tired, drunk,
hasty, or in other states.
After creating a normal behaviour portrait, all the redundant data can be
removed, but most of it remains for identification purposes. Verification can be
done with only one set of data (instead of using a database of a million examples),
but still, a lot of information will be collected and stored in the process.

By combining unique identifying markers — physiological and behavioural —
companies will create a reliable, multi-level authorization process for access
control systems. Depending on how critical the information is to secure and
how accurate the recognition system should be, both physical and behavioural
recognition can complete each other.

Physical Biometrics Methods

Static methods are based on the physiological characteristics of a person present
throughout their life. These features include face and hand geometry, iris, vein
patterns, and other features. In the world market for biometric security, static
methods are mainly represented. Dynamic authentication and combined
information security systems occupy only 20% of the market. However, in
recent years, there has been active development of dynamic protection methods.

DNA Matching

Description: Biometric identification using a person's DNA code is the most
accurate method. It is based on the unique sequence of the human
deoxyribonucleic acid chain. The process begins with preparing a control DNA
sample (buccal smear, blood, saliva, other body secretions, or tissue). The
sample is analyzed and a DNA profile is created, which is compared with
another sample to determine whether the two are identical.
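
In forensic practice, a DNA profile is typically a set of allele pairs at
standard STR (short tandem repeat) loci, and two profiles are compared locus
by locus. The Python sketch below uses real locus names (D3S1358, vWA, FGA)
but illustrative allele values.

# Profile: locus name -> pair of allele repeat counts.
sample_a = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}
sample_b = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}

def profiles_match(a: dict, b: dict) -> bool:
    """Two profiles match when every shared locus carries the same
    allele pair (order-insensitive)."""
    shared = set(a) & set(b)
    return bool(shared) and all(sorted(a[l]) == sorted(b[l]) for l in shared)

print(profiles_match(sample_a, sample_b))  # True -> same source (or a twin)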

Category: Chemical
Industry Leaders: Innocence Project, 23andMe, Family Tree, Ancestry
Use-Cases: Forensic science, calculating family ties between people and
determining their predisposition to various diseases based on their DNA
samples
Security Level: Very High
Integration Costs:

Pros:

 DNA is the only biometric technology that allows you to identify relatives using an unidentified DNA sample
 Like fingerprints, DNA is one of the few biometric characteristics of a person that criminals leave behind at a crime scene
 DNA testing is a relatively mature and dynamic technology that is widely
used and familiar to the public
 Rapid DNA identification devices make sequencing possible in just 90
minutes
 Many DNA analysis results can be easily stored in databases, allowing
data to be accumulated and quickly searched by automated means

Cons:

 Low representation in the biometric market
 Identical twins share the same DNA

Accuracy Level: Very High

Ear Acoustic Authentication

Description: Unlike many other biometric methods, which require specialized
cameras, these biometric systems measure ear acoustics using special
headphones and inaudible sound waves. The microphone inside each earpiece
measures how sound waves bounce off the auricle and travel in different
directions depending on the curves of the ear canal. A digital copy of the ear
shape is converted into a biometric template for later use.
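
Conceptually, the ear acts as an acoustic filter: the template is the ear's
frequency response to a known probe signal. The Python sketch below simulates
that idea end to end; the gain curves and threshold are invented for
illustration, and ear_response() merely simulates a real earphone measurement.

import numpy as np

rng = np.random.default_rng(42)
probe = rng.normal(size=4096)              # wideband, inaudible probe signal

def ear_response(signal, gains):
    """Simulate an ear canal as a per-frequency gain curve."""
    return np.fft.irfft(np.fft.rfft(signal) * gains, n=len(signal))

true_ear = np.linspace(1.0, 0.3, 2049)     # enrolled user's gain curve
other_ear = np.linspace(0.4, 1.0, 2049)    # a different ear

def template(signal, reflection):
    """Estimate the ear's frequency response from the reflection."""
    return np.abs(np.fft.rfft(reflection)) / (np.abs(np.fft.rfft(signal)) + 1e-9)

enrolled = template(probe, ear_response(probe, true_ear))
attempt = template(probe, ear_response(probe, other_ear))
print(np.mean(np.abs(enrolled - true_ear)) < 0.05)  # True: matches enrollment
print(np.mean(np.abs(attempt - true_ear)) < 0.05)   # False: different ear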

Category: Auditory
Industry Leaders: NEC Corporation, Yahoo Labs
Use-Cases: Smartphone authentication, protection of phone calls, personal
voice messages, wireless radios, and other audio information
Security Level: Very High
Integration Costs:

Pros:

 High speed and accuracy of recognition
 Possibility of wide distribution via smartphones

Cons:

 Low representation in the biometric market

 Technology is under development

Accuracy Level: Very High

Eye Vein Recognition

Description: Scleral blood veins have recently become an option for
recognition systems. The sclera is the white, opaque, outer protective part of
the eye, where irregularly spaced blood veins are visible. The advantage of
the sclera is that it can be captured using a visible-wavelength camera.
Scleral specimens are stored not as raw images but as an encrypted template
containing about 100 measurements that contribute significantly to the
biometric matching process.

Category: Visual
Industry Leaders: TechNavio, EyeLock, EyeVerify
Use-Cases: Mobile phones, online banking apps authentication
Security Level: Very High

Pros:

 High accuracy of recognition
 Changes in the eyeball have little effect on the sclera over time
 Possibility of wide distribution via smartphones

Cons:

 Currently, the technology is under development
 Low distribution

Accuracy Level: Very High

Facial Recognition

Description: In biometric identification by the shape of the face, a 3D or 2D
image of the face is built using a high-resolution video camera. The contours
of the eyebrows, eyes, nose, lips, chin, and ears, and the distances between
them, are determined, and several variants of the image are modeled to account
for rotation of the face, tilt, changes in facial expression, and so on. A
digital photo of the face is stored in the database and used to compare images
of individual faces.
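
The matching step reduces to comparing feature vectors. In the minimal Python
sketch below, the face is assumed to have already been reduced to a small
numeric vector (inter-feature distances or an embedding); the values and the
distance threshold are illustrative assumptions.

import numpy as np

def match(probe_vector, enrolled_vector, max_distance=0.8):
    """Accept the probe as the enrolled person when the Euclidean
    distance between feature vectors falls below the threshold."""
    return float(np.linalg.norm(probe_vector - enrolled_vector)) < max_distance

enrolled = np.array([0.42, 1.10, 0.95, 0.33])  # stored face template
probe    = np.array([0.44, 1.08, 0.97, 0.31])  # new camera capture
print(match(probe, enrolled))  # True: same face within tolerance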

Category: Visual & Spatial
Industry Leaders:
20 major vendors, namely, NEC (Japan), Aware (US), Gemalto (Netherlands),
Ayonix (Japan), Idemia (France), Cognitec (Germany), nVviso SA
(Switzerland), Daon (US), Stereovision Imaging (US), Techno Brain (Kenya),
Neurotechnology (Lithuania), Innovatrics (Slovakia), id3 Technologies (France),
Herta Security (Spain), Animetrics (US), Megvii {Face++} (China), FaceFirst
(US), Sightcorp (Netherlands), FacePhi (Spain), and SmilePass (UK).
Use-Cases: Controlling access to objects or systems, identification for video
management systems, determining the profile of the customer, identification in
the banking sector, time attendance systems, biometric authentication, payment
for services
Security Level: High

Pros:

 High accuracy of recognition
 Wide range of parameters (the algorithm is not affected by age differences, lighting, head position, etc.)
 Performance (results in a split second, even on a multi-billion-entry database)
 Scalable architecture (nationwide search)
 Mobility (results in the field)

Cons:

 Data storage difficulties. Identification efficiency grows with the number of identified faces in available databases, which is still very far from world-wide or even country-wide coverage
 Efficiency can decrease due to low camera resolution and lighting issues
 Identity forgery. It is still very hard to trick the system into believing that you are someone else, but there are relatively easy ways to hide your identity, such as using special makeup

Accuracy Level: High

Finger Vein Recognition

Description: This technology uses pattern recognition techniques based on
images of human veins located in the subcutaneous part of the finger. Finger
vein recognition is one of the many forms of biometrics used for
identification and verification. Vein templates are extremely difficult to
fake because the veins are located under the skin's surface. The principle of
finger vein recognition is to use infrared rays to capture images of the
finger veins and match them against stored templates.

Category: Visual & Spatial
Industry Leaders: Fujitsu, Hitachi, NEC Corporation, Safran, Agnitio
Use-Cases: Identification in the banking sector and in medical institutions
Security Level: Very High

Pros:

 High reliability
 Almost impossible to counterfeit
 Contactless scanning
 Convenience of integration
 Affordable price

Cons:

 Sensitivity to halogen light or direct sunlight
 Some forms of anemia can interfere with the operation of the reader

Accuracy Level: Very High

Fingerprint Recognition

Description: This is the most widespread technology used in biometric
recognition systems today. It is based on the uniqueness of the papillary
patterns on people's fingers. The fingerprint obtained with a scanner is
converted into a digital code, stored in a database, and then compared with
previously enrolled and converted “fingerprint codes”. Fingerprint
identification technologies have incorporated the best features inherent in
biometrics in general.

A fingerprint identifies a specific person rather than a token or card;
unlike a password, a fingerprint cannot be forgotten, or voluntarily or
involuntarily passed on to another. Modern scanners can establish that a
fingerprint belongs to a living person and cannot be deceived by a print
presented on paper, gelatin, or glass.
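
A common way to store the "fingerprint code" is as a list of minutiae points
(position plus ridge angle), with matching counting how many probe minutiae
align with enrolled ones. The Python sketch below is a deliberately simplified
version: real matchers also compensate for rotation and translation, and all
values here are illustrative.

import math

# Template: list of minutiae as (x, y, ridge angle in degrees).
enrolled = [(12, 40, 30.0), (55, 18, 110.0), (70, 64, 250.0)]
probe    = [(13, 41, 32.0), (54, 19, 108.0), (90, 90, 10.0)]

def minutiae_match(a, b, pos_tol=3.0, angle_tol=10.0, min_hits=2):
    """Count enrolled minutiae that have a nearby probe minutia with a
    similar ridge angle; enough hits means the same finger."""
    hits = 0
    for (x1, y1, t1) in a:
        for (x2, y2, t2) in b:
            if math.hypot(x1 - x2, y1 - y2) <= pos_tol and abs(t1 - t2) <= angle_tol:
                hits += 1
                break
    return hits >= min_hits

print(minutiae_match(enrolled, probe))  # True: 2 of 3 minutiae agree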

Category: Visual & Spatial
Industry Leaders: NEC Corporation, Idemia, ID R&D, Apple Inc, Samsung, IBM
Use-Cases: Device authentication (smartphone, laptop, flash drives, etc.),
identification of criminals, identification in the banking sector
Security Level: High

Pros:

 Easy to use
 Convenience and reliability
 Low cost of devices that scan a fingerprint image

Cons:

 Some scanners cannot read a print when the skin is excessively dry
 Damage to the papillary pattern from small scratches, cuts, or chemical reagents can affect recognition

Accuracy Level: High

Footprint and Foot Dynamics

Description: A footprint can identify a person and be used to calculate
parameters such as foot length, foot category, height, weight, and BMI. In one
study, the foot movement features of 104 volunteers were examined using 3D
image processing and image extraction techniques; the researchers determined
who owned which trace with an accuracy of 99.6%, and all 104 footprints were
found to be unique. According to the study, there are strong correlations
between toe and foot length and actual height, and between height and weight.

Category: Visual & Spatial
Industry Leaders: UMANICK, Institute of Electrical and Electronics Engineers, ID R&D
Use-Cases: Forensic and medical purposes
Security Level: High

Pros:

 High accuracy
 Almost impossible to counterfeit
 Contactless scanning

Cons:

 Currently under development

Accuracy Level: High

Hand Geometry

Description: Hand geometry recognition measures characteristics such as the
length and width of the fingers, their curvature, and their relative position.
This method is outdated and rarely used, although it was once the dominant
form of biometric identification. Modern advances in fingerprint and face
recognition software have overshadowed its relevance.

Category: Visual & Spatial
Industry Leaders: 3M Company, Fulcrum Biometrics LLC., Safran SA, Fujitsu Ltd
Use-Cases: Government support systems for the citizen identification process,
smartphone applications, surveillance and security systems for larger gatherings
Security Level: Low

Pros:

 Fast, simple, accurate, and easy to use
 Can be integrated with other recognition systems
 High level of public acceptance

Cons:

 The need for contact with the scanning device
 Changes in the hand's geometry as a result of injuries, aging, and weight gain can affect the recognition accuracy

Accuracy Level: High

Iris Recognition

Description: In this form of biometric identification, the pattern of the iris
is scanned by photographing the face with a high-resolution camera. The iris
pattern, which is unique, is isolated and converted into a digital code. Since
age spots or discoloration can appear on the iris, a black-and-white image is
used.
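
The classic way to compare two such digital codes is a fractional Hamming
distance over fixed-length bit strings. The Python sketch below assumes the
iris has already been segmented and encoded; the code length, noise level, and
the quoted decision region are illustrative.

import numpy as np

rng = np.random.default_rng(0)
enrolled_code = rng.integers(0, 2, size=2048)  # stored iris code (bits)
probe_code = enrolled_code.copy()
probe_code[:100] ^= 1                          # ~5% of bits flipped by noise

def hamming_distance(a, b):
    """Fraction of differing bits between two iris codes."""
    return float(np.mean(a != b))

# Genuine comparisons score close to 0; unrelated eyes cluster near 0.5.
print(hamming_distance(enrolled_code, probe_code))  # ~0.049 -> same eye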

Category: Visual
Industry Leaders: EyeLock, Apple, Samsung, Fujitsu Ltd
Use-Cases: Integration in the access control system, identifying persons in
special areas (airports, border control areas, passport offices)
Security Level: High

Pros:

 Fast scanning
 Contactless
 Safe for users
 Recognition does not depend on glasses or contact lenses
 Impossibility of counterfeiting

Cons:

 Minor eye injuries can affect recognition
 Deterioration in identification after taking alcohol or LSD
 High integration cost

Accuracy Level: High

Body Odor Recognition

Description: Body odor characteristics are so unique to each person that they
can be used for biometric authentication. This conclusion was reached in 2017
by a group of scientists consisting of Juliana Agudelo, Vladimir Privman, and
Jan Halamek. Their idea was very simple: the composition of amino acids in
sweat is unique for each person. If you design a smartphone to determine this
composition and distinguish it from others, the user can be authenticated using
their sweat. Unlike other biometric authentication methods, it is not easy to
counterfeit the unique chemical makeup of sweat. According to scientists, body
odor recognition can be used in practice in the next 5-10 years.

Category: Olfactory
Industry Leaders: IIia Sistemas SL, Universidad Politécnica de Madrid
Use-Cases: From unlocking smart devices to protecting data inside
applications; can be used even by people with disabilities who are unable to
remember a password or to control their limbs
Security Level: High

Pros:

 The technology is promised to be impossible to counterfeit
 Can be used by people with disabilities

Cons:

 Currently under development

Accuracy Level: High

Palm Print Recognition

Description: This is a method based on the recognition of unique patterns of
various characteristics on the palms of the hands. Palm prints are recognized
using a specially configured camera or a dedicated device that processes image
data from a photograph of a palm and then compares this record with a
database. Palm prints include the data used for fingerprint recognition. Palm
scanners use optical, thermal, or tactile techniques to reveal the details of
raised areas and branches in a person's palm, as well as scars, folds, and
skin texture.

Category: Visual & Spatial
Industry Leaders: NEC Corporation, MegaMatcher, ZKTeco, DERMALOG
Use-Cases: Device authentication (smartphone, laptop, flash drives, etc.),
identification of criminals, identification in the banking sector
Security Level: High

Pros:

 It is possible to capture more distinctive features than with fingerprints
 Fast scanning
 Safe for users
 May be contactless

Cons:

 Palm print scanners tend to be bulkier and more expensive than fingerprint scanners

Accuracy Level: High

Palm Vein Recognition

Description: This type is an improved version of palm print recognition. The
algorithm is much more challenging to defeat than other biometric scans since
the veins are located deep under the skin. Infrared rays pass through the skin
surface, where the venous blood absorbs them. A special camera captures the
image, digitizes the data, and then stores it or uses it to confirm identity.

Category: Visual & Spatial
Industry Leaders: Fiserv, M2SYS, BioSmart
Use-Cases: Border control, identification in the banking sector, controlling access to objects or systems
Security Level: Very High

Pros:

 Excellent security
 Contactless scanning
 Convenience of integration
 Affordable price

Cons:

 The large size of scanners

Accuracy Level: Very High

Retinal Scan

Description: The retinal scan allows capillaries deep inside the eye to be
scanned using near-infrared cameras. The resulting image is first preprocessed
to improve its quality. It is then converted into a biometric template for
registration of a new user and subsequent verification with the template during
attempts to recognize the user. The high cost and the need to place the eye close
to the camera hinder such scanners' wider use.

Category: Visual & Spatial
Industry Leaders: EyeLock, CMITech, BioEnable, FotoNation, IDEMIA
Use-Cases: Identification in the banking sector, controlling access to objects or
systems
Security Level: Very High

Pros:

 Almost impossible to counterfeit
 High accuracy
 Fast recognition time

Cons:

 Eye diseases (cataracts or glaucoma) can negatively affect recognition
 Low level of public acceptance
 High false rejection rate

Accuracy Level: Very High

Skin Reflection

Description: The method is based on exposing an area of skin to light of
different wavelengths (in the visible and near-IR spectral regions) and
analyzing the light partially reflected from the skin at each wavelength. The
method's effectiveness is higher in temperate climates, since in cold
conditions the need to remove gloves can slow down the identification process.
Nevertheless, this technology is well suited to separating security levels in
identification systems on two independent grounds.

Category: Visual
Industry Leaders: Apple Inc, Trinamix, Qualcomm
Use-Cases: General identity verification, observation, human-computer
interaction
Security Level: Low

Pros:

 Fast recognition time
 Contactless scanning
 Convenience of integration

Cons:

 Skin reflection is not unique; it is not very reliable

Accuracy Level: Low

Thermography Recognition

Description: This is a method of representing infrared energy in the form of a
temperature distribution image. Facial biometric thermography captures thermal
patterns caused by the movement of blood under the skin. Because each person's
blood vessels are unique, the corresponding thermograms are also unique, even
among identical twins, making this biometric verification method even more
accurate than traditional facial recognition.

Category: Visual
Industry Leaders: Estone Technology, TAMRON Europe GmbH, Axis
Communications (UK Ltd), FLIR Commercial Systems, LYNRED
Use-Cases: Used for recognition in airports, public transit hubs, offices, retail
businesses, health facilities, and on public streets
Security Level: High

Pros:

 Possibility to cover a large area
 Fast recognition time
 Recognition in areas with poor lighting

Cons:

 Emotional and physical stress can affect skin temperature
 Thermal cameras are expensive
 The majority of the thermal images have low resolution

Accuracy Level: High

Behavioural Biometrics Methods

Behavioural authentication methods are based on a person's behavioural
characteristics. They evaluate the unique behaviour and subconscious movements
a person exhibits while performing an action. At the same time, such systems
can recognize a deviation from the norm as suspicious and flag it as
potentially fraudulent.

Behavioural biometrics can be adapted for a wide variety of devices, including
smartphone operating systems. The technology can be used not only in
individual applications but also to protect the entire device.

Keystroke Dynamics

Description: Keystroke dynamics take standard passwords to the next level by
tracking the rhythm of their input. Such sensors can measure how long each key
is held down, the delays between keys, the number of characters entered per
minute, and so on. Keystroke patterns work in conjunction with passwords and
PINs to enhance security.
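
The raw material is just timestamped key events. The Python sketch below shows
how the two standard features, dwell time (how long a key is held) and flight
time (the gap between keys), are derived; the timestamps are invented for
illustration.

# (key, press time in ms, release time in ms) for one typed word
events = [
    ("p", 0,   95),
    ("a", 140, 230),
    ("s", 290, 370),
    ("s", 430, 520),
]

# Dwell time: how long each key is held down.
dwell = [release - press for (_, press, release) in events]
# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

print("dwell times :", dwell)   # [95, 90, 80, 90]
print("flight times:", flight)  # [45, 60, 60]
# Averaged over many logins, these vectors form the user's typing
# profile; a login whose rhythm deviates too far can be challenged.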

Category: Behavioural
Industry Leaders: TypingDNA, ID Control, BehavioSec
Use-Cases: Device user identification, part of multifactor authentication, used
for observation
Security Level: High

Pros:

 No special equipment is required for this method
 Fast and secure
 Hard to copy by observation

Cons:

 Typing rhythm can change because of fatigue, illness, the effects of drugs
or alcohol, keyboard changes, etc.
 Can't identify the same person using different keyboard layouts

Accuracy Level: High

Signature Recognition

Description: The method uses a pen and a special tablet connected to a
computer to compare and verify patterns. A high-quality tablet can capture
behavioural characteristics such as speed, pressure, and signing time. During
the registration phase, the person must sign on the tablet several times to
collect data. The signature recognition algorithms then extract unique
features such as timing, pressure, speed, stroke direction, essential points
along the signature path, and signature size. The algorithm assigns different
weights to these points.
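
A minimal Python sketch of that weighted comparison is shown below: each
extracted feature contributes to the score in proportion to its weight. The
feature set, weights, and acceptance threshold are illustrative assumptions.

# Enrolled template and a newly captured signature, as feature dicts.
template  = {"time_s": 2.1, "pressure": 0.63, "speed": 5.2, "size": 1.0}
candidate = {"time_s": 2.3, "pressure": 0.60, "speed": 5.0, "size": 1.1}
weights   = {"time_s": 0.2, "pressure": 0.4, "speed": 0.3, "size": 0.1}

def signature_score(t, c, w):
    """Weighted sum of relative feature differences; 0 means identical."""
    return sum(w[k] * abs(t[k] - c[k]) / abs(t[k]) for k in t)

score = signature_score(template, candidate, weights)
print(round(score, 3), "-> accept" if score < 0.1 else "-> reject")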

Category: Behavioural
Industry Leaders: Aerial, Redrock Biometrics, Sense, University of Oxford,
Mobbeel
Use-Cases: Document verification and authorization, identification in the
banking sector
Security Level: High

Pros:

 Almost impossible to counterfeit
 Widespread in business practice
 Fast and secure
 Convenience of integration

Cons:

 High recognition error rate until the user gets used to the signing pad
 Hand injuries can affect recognition accuracy

Accuracy Level: Medium

Speaker Recognition

Description: For this method, the user needs to speak a word or phrase into a
microphone in order to acquire a sample of the person's speech. The
microphone's electrical signal is converted to a digital signal using an
analog-to-digital converter (ADC) and recorded in computer memory as a
digitized sample. The computer then compares the input voice with the stored
digitized voice sample and identifies the person. Speaker recognition focuses
on the content of the spoken phrase, as opposed to voice recognition.
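
The comparison step can be illustrated in a few lines of Python: the digitized
sample is reduced to an average frequency spectrum, and a new utterance is
scored against the stored one by cosine similarity. A deployed system would
use richer features (MFCCs, for instance) and real recordings; the signals
here are simulated.

import numpy as np

def avg_spectrum(signal, frame=256):
    """Mean magnitude spectrum over fixed-size frames of the signal."""
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, frame)]
    return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
enrolled_utterance = rng.normal(size=8000)            # stored digitized sample
same_speaker = enrolled_utterance + 0.1 * rng.normal(size=8000)

score = cosine(avg_spectrum(enrolled_utterance), avg_spectrum(same_speaker))
print(score > 0.9)  # True: the spectra align for the same (simulated) voice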

Category: Behavioural, Auditory
Industry Leaders: Apple Inc, Microsoft, Google LLC
Use-Cases: Telephone and internet transactions, sound signatures for digital
documents, online education systems, emergency services
Security Level: Low

Pros:

 Convenience of integration
 Fast recognition time
 Contactless scanning

Cons:

 Sensitivity to microphone quality and noise
 Risk of counterfeit

Accuracy Level: Low

Voice Recognition

Description: Voice recognition compares an instance of a spoken phrase with a
digital template. It is used as a means of identification and authentication
in security systems such as access control and time tracking. The system
creates digital templates with a very high probability of correct
interpretation. Every person's voice includes physiological and behavioural
characteristics.

Physiological aspects are based on the size and shape of the mouth, throat,
larynx, and nasal cavity, each person's body weight, and other factors.
Behavioural traits are based on language, education level, and place of
residence, which can lead to specific intonations, accents, and dialects.

Category: Behavioural, Auditory
Industry Leaders: Nuance Communications, Google LLC, Amazon.com, Apple Inc

Use-Cases: Online banking, emergency services, call-center recognition,
healthcare (where demand for voice recognition is high)
Security Level: Low

Pros:

 Convenience of integration
 Fast recognition time
 Contactless scanning

Cons:

 Risk of counterfeit
 Inability to suppress external noise
 Recognition accuracy problems

Accuracy Level: Low

Gait Recognition

Description: Gait biometrics capture step patterns on video and then convert
the extracted data into a mathematical representation. This type of biometrics
is discreet and unobtrusive, making it well suited to mass crowd surveillance.
It is also an advantage that these systems can quickly identify people from
afar.
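
One simple mathematical representation is a normalized gait-cycle signal
compared by correlation, as in the Python sketch below. The signals are
simulated rather than extracted from real video, and the 0.95 decision
threshold is an illustrative assumption.

import numpy as np

t = np.linspace(0, 2 * np.pi, 100)            # one gait cycle, resampled
enrolled_cycle = np.sin(t) + 0.3 * np.sin(3 * t)        # stored signature
rng = np.random.default_rng(7)
probe_same  = enrolled_cycle + 0.05 * rng.normal(size=100)
probe_other = np.sin(t) + 0.6 * np.sin(2 * t)           # a different walker

def gait_similarity(a, b):
    """Pearson correlation between two normalized gait cycles."""
    return float(np.corrcoef(a, b)[0, 1])

print(gait_similarity(enrolled_cycle, probe_same) > 0.95)   # True
print(gait_similarity(enrolled_cycle, probe_other) > 0.95)  # False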

Category: Behavioural
Industry Leaders: SFootBD, Watrix, Cometa Srl
Use-Cases: In the medical and forensic sectors
Security Level: Low

Pros:

 Contactless scanning
 Possibility to cover a large area
 Fast recognition time
 Technology is developing rapidly

Cons:

 Not as reliable as other biometric methods
 Clothes and shoes can affect recognition accuracy

Accuracy Level: Low

Lip Motion

Description: Lip motion is one of the newest forms of biometric verification.


Just as a deaf person can track the lips' movement to determine what is being
said, biometric systems record the activity of the muscles around the mouth to
form a pattern of movement. Biometric sensors of this kind often require the
user to repeat a password to determine the appropriate lip movements and then
grant or deny access based on a comparison with the recorded pattern.

Category: Behavioural, Visual
Industry Leaders: Hong Kong Baptist University, AimBrain, Liopa
Use-Cases: It can be used to improve the efficiency of security systems and
complement such access methods as face recognition, retinal scanning, and
fingerprinting.
Security Level: High

Pros:

 Contactless scanning
 Fast recognition time
 Improves recognition accuracy when combined with other forms of
biometrics

Cons:

 The technology is currently under development

Accuracy Level: High
