
UNIT I

CW3551 - DATA AND INFORMATION SECURITY

UNIT I INTRODUCTION

History, What is Information Security?, Critical Characteristics of Information, NSTISSC Security Model, Components of an Information System, Securing the Components, Balancing Security and Access, The SDLC, The Security SDLC

Introduction
 James Anderson, executive consultant at Emagined Security, Inc., believes information
security in an enterprise is a “well-informed sense of assurance that the information risks
and controls are in balance.” He is not alone in his perspective.
 Many information security practitioners recognize that aligning information security
needs with business objectives must be the top priority

THE HISTORY OF INFORMATION SECURITY


 The history of information security begins with computer security. The need for
computer security—that is, the need to secure physical locations, hardware, and software
from threats—arose during World War II when the first mainframes, developed to aid
computations for communication code breaking (see Figure 1-1), were put to use.
 Multiple levels of security were implemented to protect these mainframes and maintain
the integrity of their data. Access to sensitive military locations, for example, was
controlled by means of badges, keys, and the facial recognition of authorized personnel
by security guards.
 During these early years, information security was a straightforward process composed
predominantly of physical security and simple document classification schemes. The
primary threats to security were physical theft of equipment, espionage against the
products of the systems, and sabotage.
One of the first documented security problems that fell outside these categories occurred
in the early 1960s, when a systems administrator was working on an MOTD (message of
the day) file while another administrator was editing the password file. A software glitch
mixed the two files, and the entire password file was printed on every output file.


WHAT IS DATA INFORMATION SECURITY?


Data security is the process of safeguarding digital information throughout its entire
life cycle to protect it from corruption, theft, or unauthorized access. It covers everything—
hardware, software, storage devices, and user devices; access and administrative controls; and
organizations' policies and procedures.


BENEFITS OF DATA SECURITY


1. Keeps your information safe: By adopting a mindset focused on data security and
implementing the right set of tools, you ensure sensitive data does not fall into the wrong
hands. Sensitive data can include customer payment information, hospital records, and
identification information, to name just a few. With a data security program created to meet
the specific needs of your organization, this info stays safe and secure.
2. Helps keep your reputation clean: When people do business with your organization, they
entrust their sensitive information to you, and a data security strategy enables you to provide
the protection they need. Your reward? A stellar reputation among clients, partners, and the
business world in general.
3. Gives you a competitive edge: In many industries, data breaches are commonplace, so if you
can keep data secure, you set yourself apart from the competition, which may be struggling to
do the same.
4. Saves on support and development costs: If you incorporate data security measures early in
the development process, you may not have to spend valuable resources for designing and
deploying patches or fixing coding problems down the road.
CRITICAL CHARACTERISTICS OF INFORMATION
 The value of information comes from the characteristics it possesses. When a
characteristic of Information changes, the value of that information either increases, or,
more commonly, decreases.
 Some characteristics affect information’s value to users more than others do. This can
depend on circumstances; for example, timeliness of information can be a critical factor,
because information loses much or all of its value when it is delivered too late. Though
information security professionals and end users share an understanding of the
characteristics of information, tensions can arise when the need to secure the information
from threats conflicts with the end users’ need for unhindered access to the information.


1. Availability: Availability enables authorized users—persons or computer systems—to


access information without interference or obstruction and to receive it in the required
format. Consider, for example, research libraries that require identification before
entrance. Librarians protect the contents of the library so that they are available only to
authorized patrons. The librarian must accept a patron’s identification before that patron
has free access to the book stacks.
 Once authorized patrons have access to the contents of the stacks, they expect to find the
information they need available in a useable format and familiar language, which in this
case typically means bound in a book and written in English.
2. Accuracy: Information has accuracy when it is free from mistakes or errors and it has the
value that the end user expects. If information has been intentionally or unintentionally
modified, it is no longer accurate. Consider, for example, a checking account. You
assume that the information contained in your checking account is an accurate
representation of your finances. Whether an error is yours or the bank's, an inaccurate
bank balance could cause you to make mistakes, such as bouncing a check.
3. Authenticity: Authenticity of information is the quality or state of being genuine or
original, rather than a reproduction or fabrication. Information is authentic when it is in
the same state in which it was created, placed, stored, or transferred.
 Consider for a moment some common assumptions about e-mail. When you receive e-
mail, you assume that a specific individual or group created and transmitted the e-mail—
you assume you know the origin of the e-mail.
 This is not always the case. E-mail spoofing, the act of sending an e-mail message with a
modified field, is a problem for many people today, because often the modified field is
the address of the originator.
 Spoofing the sender’s address can fool e-mail recipients into thinking that messages are
legitimate traffic, thus inducing them to open e-mail they otherwise might not have.
Spoofing can also alter data being transmitted across a network, as in the case of User
Datagram Protocol (UDP) packet spoofing, which can enable the attacker to get access to data
stored on computing systems.
 Another variation on spoofing is phishing, when an attacker attempts to obtain personal
or financial information using fraudulent means, most often by posing as another
individual or organization.


 When used in a phishing attack, e-mail spoofing lures victims to a Web server that does
not represent the organization it purports to, in an attempt to steal their private data such
as account numbers and passwords.
 The most common variants include posing as a bank or brokerage company, e-commerce
organization, or Internet service provider. A related technique is pretexting—posing as
someone else in order to obtain that person's private records. Even when authorized,
pretexting does not always lead to a satisfactory outcome.
 In 2006, the chairwoman of Hewlett-Packard's board, Patricia Dunn, authorized the use of
pretexting to identify a corporate director suspected of leaking confidential information.
The resulting firestorm of negative publicity led to Ms. Dunn's eventual departure from
the company.
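To make the e-mail spoofing discussion above concrete, the following is a minimal Python sketch (an illustration added to these notes, not part of the original text) that inspects a saved message and flags a mismatch between the From and Return-Path headers, one common sign of a spoofed originator. The file name sample.eml and the simple mismatch rule are assumptions for illustration only; a mismatch does not prove spoofing, and matching headers do not prove authenticity.

    # Illustrative only: compare From and Return-Path headers of a saved message.
    # "sample.eml" is a placeholder file name.
    from email import policy
    from email.parser import BytesParser
    from email.utils import parseaddr

    with open("sample.eml", "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    from_addr = parseaddr(str(msg.get("From", "")))[1].lower()
    return_path = parseaddr(str(msg.get("Return-Path", "")))[1].lower()

    if from_addr and return_path and from_addr != return_path:
        print(f"Possible spoofed originator: From={from_addr}, Return-Path={return_path}")
    else:
        print("From and Return-Path agree (still not a guarantee of authenticity).")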
4. Confidentiality: Information has confidentiality when it is protected from disclosure or
exposure to unauthorized individuals or systems. Confidentiality ensures that only those
with the rights and privileges to access information are able to do so.
 When unauthorized individuals or systems can view information, confidentiality is
breached.
 To protect the confidentiality of information, you can use a number of measures,
including the following:
 Information classification
 Secure document storage
 Application of general security policies
 Education of information custodians and end users
 Individuals who transact with an organization expect that their personal information will
remain confidential, whether the organization is a federal agency, such as the Internal
Revenue Service, or a business.
 Problems arise when companies disclose confidential information. Sometimes this
disclosure is intentional, but there are times when disclosure of confidential information
happens by mistake—for example, when confidential information is mistakenly e-mailed
to someone outside the organization rather than to someone inside the organization.
 Several cases of privacy violation are outlined in Offline: Unintentional Disclosures.
Other examples of confidentiality breaches are an employee throwing away a document
containing critical information without shredding it, or a hacker who successfully breaks
into an internal database of a Web-based organization and steals sensitive information
about the clients, such as names, addresses, and credit card numbers.


 As a consumer, you give up pieces of confidential information in exchange for


convenience or value almost daily. By using a “members only” card at a grocery store,
you disclose some of your spending habits.
 When you fill out an online survey, you exchange pieces of your personal history for
access to online privileges. The bits and pieces of your information that you disclose are
copied, sold, replicated, distributed, and eventually coalesced into profiles and even
complete dossiers of yourself and your life.
 A similar technique is used in a criminal enterprise called salami theft. A deli worker
knows he or she cannot steal an entire salami, but a few slices here and there can be taken
home without notice; similarly, an attacker who takes only small pieces of information at
a time may eventually assemble something complete and useable.
5. Integrity: Information has integrity when it is whole, complete, and uncorrupted. The
integrity of information is threatened when the information is exposed to corruption,
damage, destruction, or other disruption of its authentic state.
 Corruption can occur while information is being stored or transmitted. Many computer
viruses and worms are designed with the explicit purpose of corrupting data. For this
reason, a key method for detecting a virus or worm is to look for changes in file integrity
as shown by the size of the file.
 Another key method of assuring information integrity is file hashing, in which a file is
read by a special algorithm that uses the value of the bits in the file to compute a single
large number called a hash value.
 The hash value for any combination of bits is unique. If a computer system performs the
same hashing algorithm on a file and obtains a different number than the recorded hash
value for that file, the file has been compromised and the integrity of the information is
lost.
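As an illustration of file hashing (a sketch added to these notes, not from the original text), the Python snippet below uses the standard hashlib module to compute the SHA-256 hash of a file and compare it with a previously recorded value; the file name document.pdf and the recorded hash are placeholders.

    # Integrity-check sketch: recompute a file's hash and compare it with a
    # value recorded when the file was known to be good.
    import hashlib

    RECORDED_HASH = "placeholder-recorded-sha256-value"  # hypothetical stored value

    def sha256_of_file(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    current = sha256_of_file("document.pdf")
    if current != RECORDED_HASH:
        print("Hash mismatch: the file may have been modified; integrity is in doubt.")
    else:
        print("Hash matches the recorded value.")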
 Information integrity is the cornerstone of information systems, because information is of
no value or use if users cannot verify its integrity. File corruption is not necessarily the
result of external forces, such as hackers.
 Noise in the transmission media, for instance, can also cause data to lose its integrity.
Transmitting data on a circuit with a low voltage level can alter and corrupt the data.
Redundancy bits and check bits can compensate for internal and external threats to the
integrity of information.
 During each transmission, algorithms, hash values, and the error-correcting codes ensure
the integrity of the information. Data whose integrity has been compromised is
retransmitted.
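To make the idea of redundancy and check bits concrete, here is a small sketch (an illustration added to these notes) that appends a single even-parity bit to a byte and verifies it on receipt. Real links use stronger error-detecting and error-correcting codes such as CRCs or Hamming codes, which this toy example only hints at.

    # Even-parity sketch: one extra bit lets the receiver detect any single-bit error.
    def parity_bit(byte: int) -> int:
        return bin(byte).count("1") % 2           # 1 if the number of 1-bits is odd

    def send(byte: int):
        return byte, parity_bit(byte)             # transmit the data plus its check bit

    def receive(byte: int, check: int) -> bool:
        return parity_bit(byte) == check          # False means the data was corrupted

    data, check = send(0b10110010)
    corrupted = data ^ 0b00000100                 # flip one bit "in transit"
    print(receive(data, check))                   # True  - data intact
    print(receive(corrupted, check))              # False - integrity lost, retransmit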


6. Utility: The utility of information is the quality or state of having value for some purpose
or end. Information has value when it can serve a purpose. If information is available, but
is not in a format meaningful to the end user, it is not useful.
7. Possession: The possession of information is the quality or state of ownership or control.
Information is said to be in one’s possession if one obtains it, independent of format or
other characteristics. While a breach of confidentiality always results in a breach of
possession, a breach of possession does not always result in a breach of confidentiality.
NSTISSC SECURITY MODEL
 The definition of information security presented is based in part on the CNSS document
called the National Training Standard for Information Systems Security
Professionals NSTISSI No. 4011.
 This document presents a comprehensive information security model and has become a
widely accepted evaluation standard for the security of information systems. The model,
created by John McCumber in 1991, provides a graphical representation of the
architectural approach widely used in computer and information security; it is now known
as the McCumber Cube.
 The McCumber Cube, shown in Figure 1-6, has three dimensions: the desired goals of
security (confidentiality, integrity, availability), the states of information (storage,
processing, transmission), and the categories of safeguards (policy and practices,
education and training, technology). If extrapolated, the three dimensions of each axis
become a 3 × 3 × 3 cube with 27 cells representing areas that must be
addressed to secure today’s information systems.
 To ensure system security, each of the 27 areas must be properly addressed during the
security process. For example, the intersection between technology, integrity, and storage
requires a control or safeguard that addresses the need to use technology to protect the
integrity of information while in storage.
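The 27 cells can be enumerated directly. The short Python sketch below (added for illustration, not part of the original notes) lists every (goal, information state, safeguard) combination using the three axes named above.

    # Enumerate the 27 McCumber Cube cells as (goal, information state, safeguard).
    from itertools import product

    goals = ["confidentiality", "integrity", "availability"]
    states = ["storage", "processing", "transmission"]
    safeguards = ["policy and practices", "education and training", "technology"]

    for i, (goal, state, safeguard) in enumerate(product(goals, states, safeguards), start=1):
        print(f"{i:2d}. {goal:15s} | {state:12s} | {safeguard}")
    # Example cell: integrity | storage | technology -> e.g., a host-based
    # integrity checker that protects stored data.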


 What is commonly left out of such a model is the need for guidelines and policies that
provide direction for the practices and implementations of technologies.
COMPONENTS OF AN INFORMATION SYSTEM
As shown in Figure 1-7, an information system (IS) is much more than computer hardware;
it is the entire set of software, hardware, data, people, procedures, and networks that make
possible the use of information resources in the organization. These six critical components
enable information to be input, processed, output, and stored. Each of these IS
components has its own strengths and weaknesses, as well as its own characteristics and uses.
Each component of the information system also has its own security requirements.
Software
 The software component of the IS comprises applications, operating systems, and
assorted command utilities. Software is perhaps the most difficult IS component to
secure.
 The exploitation of errors in software programming accounts for a substantial portion of
the attacks on information. In fact, many facets of daily life are affected by buggy
software, from smartphones that crash to flawed automotive control computers that lead
to recalls.
 Software carries the lifeblood of information through an organization. Unfortunately,
software programs are often created under the constraints of project management, which
limit time, cost, and manpower.


Hardware
 Hardware is the physical technology that houses and executes the software, stores and
transports the data, and provides interfaces for the entry and removal of information from
the system.
 Physical security policies deal with hardware as a physical asset and with the protection
of physical assets from harm or theft. Applying the traditional tools of physical security,
such as locks and keys, restricts access to and interaction with the hardware components
of an information system.
 Securing the physical location of computers and the computers themselves is important
because a breach of physical security can result in a loss of information. Unfortunately,
most information systems are built on hardware platforms that cannot guarantee any level
of information security if unrestricted access to the hardware is possible.
 Before September 11, 2001, laptop thefts in airports were common. A two-person team
worked to steal a computer as its owner passed it through the conveyor scanning devices.
 The first perpetrator entered the security area ahead of an unsuspecting target and quickly
went through. Then, the second perpetrator waited behind the target until the target placed
his/her computer on the baggage scanner. As the computer was whisked through, the
second agent slipped ahead of the victim and entered the metal detector with a substantial
collection of keys, coins, and the like, thereby slowing the detection process and allowing
the first perpetrator to grab the computer and disappear in a crowded walkway. While the
security response to September 11, 2001 did tighten the security process at airports,
hardware can still be stolen in airports and other public places.
 Although laptops and notebook computers are worth a few thousand dollars, the
information contained in them can be worth a great deal more to organizations and
individuals.
Data
 Data stored, processed, and transmitted by a computer system must be protected. Data is
often the most valuable asset possessed by an organization and it is the main target of
intentional attacks. Systems developed in recent years are likely to make use of database
management systems. When done properly, this
should improve the security of the data and the application.
 Unfortunately, many system development projects do not make full use of the database
management system’s security capabilities, and in some cases the database is
implemented in ways that are less secure than traditional file systems.


People
 Though often overlooked in computer security considerations, people have always been a
threat to information security.
 Around 1275 A.D., Kublai Khan finally achieved what the Huns had been trying to do for
thousands of years: breach the Great Wall of China. Initially, the Khan’s army tried to climb over, dig under, and break
through the wall. In the end, the Khan simply bribed the gatekeeper—and the rest is
history. Whether this event actually occurred or not, the moral of the story is that people
can be the weakest link in an organization’s information security program.
 And unless policy, education and training, awareness, and technology are properly
employed to prevent people from accidentally or intentionally damaging or losing
information, they will remain the weakest link.
Procedures
 Another frequently overlooked component of an IS is procedures. Procedures are written
instructions for accomplishing a specific task. When an unauthorized user obtains an
organization’s procedures, this poses a threat to the integrity of the information.
 For example, a consultant to a bank learned how to wire funds by using the computer
center’s procedures, which were readily available. By taking advantage of a security
weakness (lack of authentication), this bank consultant ordered millions of dollars to be
transferred by wire to his own account.
 Most organizations distribute procedures to their legitimate employees so they can access
the information system, but many of these companies often fail to provide proper
education on the protection of the procedures.
 Educating employees about safeguarding procedures is as important as physically
securing the information system. After all, procedures are information in their own right.
Therefore, knowledge of procedures, as with all critical information, should be
disseminated among members of the organization only on a need-to-know basis.
Networks
 The IS component that created much of the need for increased computer and information
security is networking.
 When information systems are connected to each other to form local area networks
(LANs), and these LANs are connected to other networks such as the Internet, new
security challenges rapidly emerge.
 The physical technology that enables network functions is becoming more and more
accessible to organizations of every size. Applying the traditional tools of physical


security, such as locks and keys, to restrict access to and interaction with the hardware
components of an information system is still important; but when computer systems are
networked, this approach is no longer enough.
Balancing Information Security and Access
 Even with the best planning and implementation, it is impossible to obtain perfect
information security. Information security cannot be absolute: it is a process, not a goal. It
is possible to make a system available to anyone, anywhere, anytime, through any means.
 However, such unrestricted access poses a danger to the security of the information. On
the other hand, a completely secure information system would not allow anyone access.
For instance, when challenged to achieve a TCSEC C-2 level security certification for its
Windows operating system, Microsoft had to remove all networking components and
operate the computer from only the console in a secured room.
 To achieve balance—that is, to operate an information system that satisfies the user and
the security professional—the security level must allow reasonable access, yet protect
against threats.
 Figure 1-8 shows some of the competing voices that must be considered when balancing
information security and access. Because of today’s security concerns and issues, an
information system or data-processing department can get too entrenched in the
management and protection of systems.
 An imbalance can occur when the needs of the end user are undermined by too heavy a
focus on protecting and administering the information systems. Both information security
technologists and end users must recognize that both groups share the same overall goals
of the organization—to ensure the data is available when, where, and how it is needed,
with minimal delays or obstacles. In an ideal world, this level of availability can be met
even after concerns about loss, damage, interception, or destruction have been addressed.


THE SDLC
The Systems Development Life Cycle
 Information security must be managed in a manner similar to any other major system
implemented in an organization. One approach for implementing an information security
system in an organization with little or no formal security in place is to use a variation of
the systems development life cycle (SDLC): the security systems development life cycle
(SecSDLC). To understand a security systems development life cycle, you must first
understand the basics of the method upon which it is based.

Methodology and Phases


 The systems development life cycle (SDLC) is a methodology for the design and
implementation of an information system. A methodology is a formal approach to
solving a problem by means of a structured sequence of procedures.
 Using a methodology ensures a rigorous process with a clearly defined goal and increases
the probability of success. Once a methodology has been adopted, the key milestones are
established and a team of individuals is selected and made accountable for accomplishing
the project goals.
 The traditional SDLC consists of six general phases. If you have taken a system analysis
and design course, you may have been exposed to a model consisting of a different
number of phases. SDLC models range from having three to twelve phases, all of which
have been mapped into the six presented here.
 The waterfall model pictured in Figure 1-10 illustrates that each phase begins with the
results and information gained from the previous phase. At the end of each phase comes a


structured review or reality check, during which the team determines if the project should
be continued, discontinued, outsourced, postponed, or returned to an earlier phase
depending on whether the project is proceeding as expected and on the need for additional
expertise, organizational knowledge, or other resources.
 Once the system is implemented, it is maintained (and modified) over the remainder of its
operational life. Any information systems implementation may have multiple iterations as
the cycle is repeated over time.
 Only by means of constant examination and renewal can any system, especially an
information security program, perform up to expectations in the constantly changing
environment in which it is placed. The following sections describe each phase of the
traditional SDLC.
Investigation
 The first phase, investigation, is the most important. What problem is the system being
developed to solve? The investigation phase begins with an examination of the event or
plan that initiates the process. During the investigation phase, the objectives, constraints,
and scope of the project are specified.
 A preliminary cost-benefit analysis evaluates the perceived benefits and the appropriate
levels of cost for those benefits. At the conclusion of this phase, and at every phase
following, a feasibility analysis assesses the economic, technical, and behavioral
feasibilities of the process and ensures that implementation is worth the organization’s
time and effort.
Analysis
 The analysis phase begins with the information gained during the investigation phase.
This phase consists primarily of assessments of the organization, its current systems, and
its capability to support the proposed systems. Analysts begin by determining what the
new system is expected to do and how it will interact with existing systems. This phase
ends with the documentation of the findings and an update of the feasibility analysis.
Logical Design
 In the logical design phase, the information gained from the analysis phase is used to
begin creating a systems solution for a business problem. In any systems solution, it is
imperative that the first and driving factor is the business need.
 Based on the business need, applications are selected to provide needed services, and then
data support and structures capable of providing the needed inputs are chosen. Finally,
based on all of the above, specific technologies to implement the physical solution are


delineated. The logical design is, therefore, the blueprint for the desired solution. The
logical design is implementation independent, meaning that it contains no reference to
specific technologies, vendors, or products.
 It addresses, instead, how the proposed system will solve the problem at hand. In this
stage, analysts generate a number of alternative solutions, each with corresponding
strengths and weaknesses, and costs and benefits, allowing for a general comparison of
available options. At the end of this phase, another feasibility analysis is performed.
Physical Design
 During the physical design phase, specific technologies are selected to support the
alternatives identified and evaluated in the logical design. The selected components are
evaluated based on a make-or-buy decision (develop the components in-house or
purchase them from a vendor).
 Final designs integrate various components and technologies. After yet another feasibility
analysis, the entire solution is presented to the organizational management for approval.
Implementation
 In the implementation phase, any needed software is created. Components are ordered,
received, and tested. Afterward, users are trained and supporting documentation created.
Once all components are tested individually, they are installed and tested as a system.
Again a feasibility analysis is prepared, and the sponsors are then presented with the
system for a performance review and acceptance test.
Maintenance and Change
 The maintenance and change phase is the longest and most expensive phase of the
process. This phase consists of the tasks necessary to support and modify the system for
the remainder of its useful life cycle. Even though formal development may conclude
during this phase, the life cycle of the project continues until it is determined that the
process should begin again from the investigation phase.
 At periodic points, the system is tested for compliance, and the feasibility of continuance
versus discontinuance is evaluated. Upgrades, updates, and patches are managed. As the
needs of the organization change, the systems that support the organization must also
change.
 It is imperative that those who manage the systems, as well as those who support them,
continually monitor the effectiveness of the systems in relation to the organization’s
environment. When a current system can no longer support the evolving mission of the
organization, the project is terminated and a new project is implemented.


Securing the SDLC


 Each of the phases of the SDLC should include consideration of the security of the
system being assembled as well as the information it uses. Whether the system is custom
and built from scratch, is purchased and then customized, or is commercial off-the-shelf
software (COTS), the implementing organization is responsible for ensuring it is used
securely. This means that each implementation of a system is secure and does not risk
compromising the confidentiality, integrity, and availability of the organization’s
information assets.
 Each of the example SDLC phases includes a minimum set of security steps needed to
effectively incorporate security into a system during its development. An organization
will either use the general SDLC described [earlier] or will have developed a tailored
SDLC that meets their specific needs. In either case, NIST recommends that
organizations incorporate the associated IT security steps of this general SDLC into their
development process:

Investigation/Analysis Phases
 Security categorization—defines three levels (i.e., low, moderate, or high) of potential
impact on organizations or individuals should there be a breach of security (a loss of
confidentiality, integrity, or availability). Security categorization standards assist
organizations in making the appropriate selection of security controls for their
information systems.
 Preliminary risk assessment—results in an initial description of the basic security needs
of the system. A preliminary risk assessment should define the threat environment in
which the system will operate.
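The security categorization step described above is often expressed as a "high-water mark" rule in the spirit of FIPS 199: the overall category of a system is the highest of the impact levels assigned to confidentiality, integrity, and availability. The Python sketch below is an illustration added to these notes; the example system and its impact levels are invented.

    # High-water-mark categorization sketch (illustrative values only).
    LEVELS = {"low": 1, "moderate": 2, "high": 3}

    def categorize(confidentiality: str, integrity: str, availability: str) -> str:
        # Overall category = highest impact level among the three security objectives.
        return max((confidentiality, integrity, availability), key=lambda lvl: LEVELS[lvl])

    # Hypothetical payroll system: confidentiality breach = moderate impact,
    # integrity loss = high impact, availability loss = low impact.
    print(categorize("moderate", "high", "low"))   # -> "high"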


Logical/Physical Design Phases


 Risk assessment—analysis that identifies the protection requirements for the system
through a formal risk assessment process. This analysis builds on the initial risk
assessment performed during the Initiation phase, but will be more in-depth and specific.
 Security functional requirements analysis—analysis of requirements that may include
the following components:
(1) System security environment (i.e., enterprise information security policy and
enterprise security architecture) and
(2) Security functional requirements
 Security assurance requirements analysis—analysis of requirements that address the
developmental activities required and the assurance evidence needed to produce the desired
level of confidence that the information security will work correctly and effectively. The
analysis, based on legal and functional security requirements, will be used as the basis for
determining how much and what kinds of assurance are required.
 Cost considerations and reporting—determines how much of the development cost can
be attributed to information security over the life cycle of the system. These costs include
hardware, software, personnel, and training.
 Security planning—ensures those agreed upon security controls, planned or in place, are
fully documented. The security plan also provides a complete characterization or
description of the information system as well as attachments or references to key
documents supporting the agency’s information security program (e.g., configuration
management plan, contingency plan, incident response plan, security awareness and
training plan, rules of behavior, risk assessment, security test and evaluation results,
system interconnection agreements, security authorizations/ accreditations, and plan of
action and milestones).
 Security control development—ensures that security controls described in the respective
security plans are designed, developed, and implemented. For information systems
currently in operation, the security plans for those systems may call for the development
of additional security controls to supplement the controls already in place or the
modification of selected controls that are deemed to be less than effective.
 Developmental security test and evaluation—ensures that security controls developed for
a new information system are working properly and are effective. Some types of security
controls (primarily those controls of a non-technical nature) cannot be tested and
evaluated until the information system is deployed—these controls are typically


management and operational controls.
 Other planning components—ensure that all necessary components of the development
process are considered when incorporating security into the life cycle. These components
include selection of the appropriate contract type, participation by all necessary functional
groups within an organization, participation by the certifier and accreditor, and development
and execution of necessary contracting plans and processes.
Implementation Phase
 Inspection and acceptance—ensures that the organization validates and verifies that the
functionality described in the specification is included in the deliverables.
 System integration—ensures that the system is integrated at the operational site where the
information system is to be deployed for operation. Security control settings and switches
are enabled in accordance with vendor instructions and available security implementation
guidance.
 Security certification—ensures that the controls are effectively implemented through
established verification techniques and procedures and gives organization officials
confidence that the appropriate safeguards and countermeasures are in place to protect the
organization’s information system. Security certification also uncovers and describes the
known vulnerabilities in the information system.
 Security accreditation—provides the necessary security authorization of an information
system to process, store, or transmit information that is required. This authorization is
granted by a senior organization official and is based on the verified effectiveness of
security controls to some agreed-upon level of assurance and an identified residual risk to
agency assets or operations.
Maintenance and Change Phase
 Configuration management and control—ensures adequate consideration of the potential
security impacts due to specific changes to an information system or its surrounding
environment. Configuration management and configuration control procedures are critical
to establishing an initial baseline of hardware, software, and firmware components for the
information system and subsequently controlling and maintaining an accurate inventory
of any changes to the system.
 Continuous monitoring—ensures that controls continue to be effective in their application
through periodic testing and evaluation. Security control monitoring (i.e., verifying the
continued effectiveness of those controls over time) and reporting the security status of
the information system to appropriate agency officials is an essential activity of a
comprehensive information security program.
 Information preservation—ensures that information is retained, as necessary, to conform
to current legal requirements and to accommodate future technology changes that may
render the retrieval method obsolete.
 Media sanitization—ensures that data is deleted, erased, and written over as necessary.
 Hardware and software disposal—ensures that hardware and software are disposed of as
directed by the information system security officer. (Adapted from Security
Considerations in the Information System Development Life Cycle.)
 It is imperative that information security be designed into a system from its inception,
rather than added in during or after the implementation phase. Information systems that
were designed with no security functionality, or with security functions added as an
afterthought, often require constant patching, updating, and maintenance to prevent risk to
the systems and information.
 It is a well-known adage that “an ounce of prevention is worth a pound of cure.” With this
in mind, organizations are moving toward more security-focused development
approaches, seeking to improve not only the functionality of the systems they have in
place, but consumer confidence in their products. In early 2002, Microsoft effectively
suspended development work on many of its products while it put its OS developers,
testers, and program managers through an intensive program focusing on secure software
development.
 It also delayed release of its flagship server operating system to address critical security
issues. Many other organizations are following Microsoft’s recent lead in putting security
into the development process.

***************

UNIT II SECURITY INVESTIGATION
Need for Security, Business Needs, Threats, Attacks, Legal, Ethical and
Professional Issues - An Overview of Computer Security - Access Control
Matrix, Policy-Security policies, Confidentiality policies, Integrity policies
and Hybrid policies
Security Investigation:
Investigate suspicious email, suspicious logins, system behavior, and network traffic. Pull
forensic system images for certain system breaches, legal issues, or to determine risk/exposure
when a system is compromised or stolen.

Need for Security:


Information security is the prevention of unauthorized access, use, modification, degradation,
and destruction of computer assets, and their protection from multiple other threats. There are
two main sub-types: physical and logical. Physical information security involves tangible
protection devices; logical information security involves non-physical protection.
Information security means protecting information and information systems from unauthorized
access, use, disclosure, disruption, alteration, or destruction. Governments, the military,
financial institutions, hospitals, and private businesses amass a great deal of confidential data
about their employees, users, products, research, and financial status.
Computer systems are vulnerable to many threats that can inflict various types of damage,
resulting in significant losses. This damage can range from errors that harm database integrity
to fires that destroy entire computer centers. Losses can stem from the actions of supposedly
trusted employees defrauding a system, from external hackers, or from careless data entry
clerks.
Information assets are essential to any business and vital to the survival of any organization in
a globalized digital economy, so an information leak is unacceptable. Should confidential data
about a business's users, finances, or new product line fall into the hands of a competitor, the
resulting breach of security could lead to lost business, lawsuits, or even failure of the business.
An information leak indicates that security measures were not properly implemented. Improper
information security hurts both users and merchants; a security breach is good for no one.
Information security is what keeps computerized commerce running. A security breach can
break the confidence of users, and it can take a long time to rebuild that trust. Information
security is therefore needed for the goodwill of the business, and companies increasingly
evaluate information security in terms of the cost of a possible breach.
Information security is also needed because organizations can be damaged by hostile software
or intruders. There can be multiple, interrelated forms of damage. These include:
 Damage to or destruction of computer systems.
 Damage to or destruction of internal data.
 Loss of sensitive information to hostile parties.
 Use of sensitive information to steal items of monetary value.
 Use of sensitive information against the organization’s customers, which may
result in legal action by customers against the organization and loss of customers.
 Damage to the reputation of an organization.
 Monetary damage due to loss of sensitive information, destruction of data, hostile
use of sensitive data, or damage to the organization’s reputation.

Business Needs:

 Protecting Confidential Information: Confidential information, such as personal data,
financial records, trade secrets, and intellectual property, must be kept secure to prevent it
from falling into the wrong hands. This type of information is valuable and can be used for
identity theft, fraud, or other malicious purposes.
 Complying with Regulations: Many industries, such as healthcare, finance, and
government, are subject to strict regulations and laws that require them to protect sensitive
data. Failure to comply with these regulations can result in legal and financial penalties, as
well as damage to the organization’s reputation.
 Maintaining Business Continuity: Information security helps ensure that critical business
operations can continue in the event of a disaster, such as a cyber-attack or natural disaster.
Without proper security measures in place, an organization’s data and systems could be
compromised, leading to significant downtime and lost revenue.
 Protecting Customer Trust: Customers expect organizations to keep their data safe and
secure. Breaches or data leaks can erode customer trust, leading to a loss of business and
damage to the organization’s reputation.
 Preventing Cyber-attacks: Cyber-attacks, such as viruses, malware, phishing, and
ransomware, are becoming increasingly sophisticated and frequent. Information security
helps prevent these attacks and minimizes their impact if they do occur.
 Protecting Employee Information: Organizations also have a responsibility to protect
employee data, such as payroll records, health information, and personal details. This
information is often targeted by cybercriminals, and its theft can lead to identity theft and
financial fraud.

Threat:
Information security threats take many forms, such as software attacks, theft of intellectual
property, identity theft, theft of equipment or information, sabotage, and information extortion.
A threat is anything that can exploit a vulnerability to breach security and negatively alter,
erase, or harm an object or objects of interest.
Threats to Information Security:
1. Types of Cyber Attacks:
Cyber attacks are a major threat to information security and can take many forms,
including:
 Malware: Malicious software designed to damage or disrupt computer systems. This
includes viruses, worms, and Trojans.
 Phishing: Fraudulent emails or websites designed to trick users into disclosing sensitive
information such as passwords or credit card numbers.
 Denial of Service (DoS) attacks: Attacks that aim to make a system or network
unavailable to its intended users by overwhelming it with traffic.
 Ransomware: Malware that encrypts files on a computer system and demands a
ransom payment in exchange for the decryption key.
 Social engineering: The use of psychological manipulation to trick individuals into
disclosing sensitive information or performing actions that compromise security.
2. Risks posed by Cyber Attacks:
Cyber attacks pose a significant risk to organizations and individuals. Some of the risks
posed by these attacks include:
 Data Loss: Cyber attacks can result in the theft or destruction of sensitive information,
leading to data loss.
 Reputation Damage: Cyber attacks can damage an organization’s reputation and
credibility, which can be difficult and expensive to repair.

Attacks:
Attacks are classified as passive or active. A passive attack is an attempt to learn or make use
of information from the system without affecting system resources, whereas an active attack is
an attempt to alter system resources or affect their operation.
Passive Attacks − Passive attacks are in the nature of eavesdropping on, or monitoring of,
transmissions. The goal of the opponent is to obtain information that is being transmitted. Two
types of passive attacks are the release of message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an electronic
mail message, or a transferred file may contain sensitive or confidential data, and we would
like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack is traffic analysis. Suppose we had a way of masking the
contents of messages or other information traffic so that opponents, even if they captured the
messages, could not extract the information from them. The common technique for masking
contents is encryption. Even with encryption protection in place, an opponent might still be
able to observe the pattern of these messages: the opponent could determine the location and
identity of communicating hosts and could observe the frequency and length of messages being
exchanged. This information might be useful in guessing the nature of the communication that
was taking place.
Active Attacks − Active attacks involve some modification of the data stream or the creation
of a false stream, and can be subdivided into four categories: masquerade, replay, modification
of messages, and denial of service.
Replay − Replay involves the passive capture of a data unit and its subsequent retransmission
to produce an unauthorized effect.
Masquerade − A masquerade takes place when one entity pretends to be a different entity. A
masquerade attack usually includes one of the other forms of active attack. For instance,
authentication sequences can be captured and replayed after a valid authentication sequence
has taken place, thereby enabling an authorized entity with few privileges to obtain extra
privileges by impersonating an entity that has those privileges.
Modification of messages − Modification of messages simply means that some portion of a
legitimate message is altered, or that messages are delayed or reordered, to produce an
unauthorized effect.
Denial of Service − Denial of service prevents or inhibits the normal use or management of
communications facilities. This attack may have a specific target; for instance, an entity may
suppress all messages directed to a particular destination.
Software attacks mean attacks by viruses, worms, Trojan horses, and the like. Many users
believe that malware, viruses, worms, and bots are all the same thing. They are not; the only
similarity is that they are all malicious software that behaves differently.
Malware is a combination of two terms: malicious and software. Malware is malicious
software—an intrusive program code or anything designed to perform malicious operations on
a system. Malware can be divided into two categories:

1) Infection methods
2) Malware actions
Malware classified by infection method includes the following:

1. Virus – Viruses have the ability to replicate themselves by hooking themselves onto
programs on the host computer, such as songs or videos, and then they travel all over the
Internet. The Creeper Virus was first detected on ARPANET. Examples include the file
virus, macro virus, boot sector virus, and stealth virus.
2. Worms – Worms are also self-replicating in nature, but they do not hook themselves onto
programs on the host computer. The biggest difference between viruses and worms is that
worms are network-aware. They can easily travel from one computer to another if a
network is available, and on the target machine they may not do much direct harm—they
will, for example, consume hard disk space and thus slow down the computer.
3. Trojan – The concept of a Trojan is completely different from viruses and worms. The
name Trojan is derived from the ‘Trojan Horse’ tale in Greek mythology, which explains
how the Greeks were able to enter the fortified city of Troy by hiding their soldiers in a
big wooden horse given to the Trojans as a gift. The Trojans were very fond of horses and
trusted the gift blindly. In the night, the soldiers emerged and attacked the city from the
inside.
Their purpose is to conceal themselves inside software that seems legitimate, and when
that software is executed they do their task of either stealing information or serving any
other purpose for which they were designed.
They often provide a backdoor gateway for malicious programs or malevolent users to
enter your system and steal your valuable data without your knowledge and permission.
Examples include FTP Trojans, proxy Trojans, and remote access Trojans.

4. Bots – Bots can be seen as an advanced form of worms. They are automated processes
designed to interact over the Internet without the need for human interaction. They can be
good or bad. A malicious bot can infect one host and, after infecting it, create a connection
to a central server that provides commands to all infected hosts attached to that network,
called a botnet.

Malware classified by its actions includes the following:

1. Adware – Adware is not exactly malicious, but it does breach the privacy of users. It
displays ads on a computer’s desktop or inside individual programs. Adware comes
attached to free-to-use software and is thus a main source of revenue for such developers.
It monitors your interests and displays relevant ads. An attacker can embed malicious code
inside the software, and the adware can then monitor your system activities and even
compromise your machine.
2. Spyware – Spyware is a program that monitors your activities on a computer and reveals
the collected information to an interested party. Spyware is generally dropped by Trojans,
viruses, or worms; once dropped, it installs itself and sits silently to avoid detection.
One of the most common examples of spyware is the keylogger. The basic job of a
keylogger is to record user keystrokes with timestamps, thus capturing interesting
information such as usernames, passwords, and credit card details.
3. Ransomware – Ransomware is a type of malware that either encrypts your files or locks
your computer, making it partially or wholly inaccessible. A screen is then displayed
asking for money, i.e., a ransom, in exchange for restoring access.
4. Scareware – Scareware masquerades as a tool that helps fix your system, but when the
software is executed it infects your system or completely destroys it. The software displays
a message to frighten you and force you to take some action, such as paying to have your
system fixed.
5. Rootkits – Rootkits are designed to gain root access, i.e., administrative privileges, on the
user’s system. Once root access is gained, the exploiter can do anything from stealing
private files to capturing private data.
6. Zombies – Zombies work similarly to spyware. The infection mechanism is the same, but
they do not spy and steal information; rather, they wait for commands from hackers.

 Theft of intellectual property means violation of intellectual property rights such as
copyrights and patents.
 Identity theft means acting as someone else to obtain that person’s personal information
or to access vital information they hold, for example accessing a person’s computer or
social media account by logging in with their credentials.
 Theft of equipment and information is increasing these days due to the mobile nature of
devices and their increasing information capacity.
 Sabotage means destroying a company’s website to cause a loss of confidence on the part
of its customers.
 Information extortion means theft of a company’s property or information in order to
receive a payment in exchange. For example, ransomware may lock a victim’s files,
making them inaccessible and thus forcing the victim to make a payment; only after
payment are the victim’s files unlocked.
These are the old-generation attacks, which continue today with new advances every year.
Apart from these there are many other threats. Below is a brief description of these
new-generation threats.
 Technology with weak security – With the advancement of technology, a new gadget is
released in the market every day, but very few are fully secured and follow information
security principles. Because the market is highly competitive, the security factor is often
compromised to make a device more up to date. This leads to theft of data and information
from the devices.
 Social media attacks – Cyber criminals identify and infect a cluster of websites that
persons of a particular organization visit, in order to steal information.
 Mobile malware – There is a saying that where there is connectivity to the Internet, there
is danger to security. The same goes for mobile phones, where gaming applications are
designed to lure customers into downloading a game and unintentionally installing
malware or a virus on the device.
 Outdated security software – With new threats emerging every day, keeping security
software updated is a prerequisite for a fully secured environment.
 Corporate data on personal devices – Many organizations now follow a BYOD (Bring
Your Own Device) policy, allowing devices such as laptops and tablets in the workplace.
BYOD clearly poses a serious threat to the security of data, but for productivity reasons
organizations continue to adopt it.
 Social engineering – Social engineering is the art of manipulating people so that they
give up confidential information such as bank account details and passwords. Criminals
can trick you into giving up private and confidential information, or they can gain your
trust to get access to your computer and install malicious software that gives them control
of it. For example, an email or message that appears to come from a friend may not
actually have been sent by that friend: a criminal who gains access to the friend’s device
can use the contact list to send infected emails and messages to all contacts. Because the
message appears to come from a known person, the recipient is likely to open the link or
attachment, unintentionally infecting the computer.

Legal, ethical, and professional issues in information security:

 Law and Ethics in Information Security

 Laws are rules that mandate or prohibit certain behavior in society; they are drawn from
ethics, which define socially acceptable behaviors. The key difference between laws and
ethics is that laws carry the sanctions of a governing authority and ethics do not. Ethics in
turn are based on cultural mores.
 Types of Law

o Civil law
o Criminal law
o Tort law
o Private law
o Public law

 Relevant U.S. Laws – General

 Computer Fraud and Abuse Act of 1986
 National Information Infrastructure Protection Act of 1996
 USA PATRIOT Act of 2001
 Telecommunications Deregulation and Competition Act of 1996
 Communications Decency Act (CDA)
 Computer Security Act of 1987

 Privacy

 The issue of privacy has become one of the hottest topics in information security.
 The ability to collect information on an individual, combine facts from separate sources,
and merge it with other information has resulted in databases of information that were
previously impossible to set up.
 The aggregation of data from multiple sources permits unethical organizations to build
databases of facts with frightening capabilities.

 Privacy of Customer Information

 Privacy of Customer Information Section of Common Carrier Regulations
 Federal Privacy Act of 1974
 The Electronic Communications Privacy Act of 1986
 The Health Insurance Portability & Accountability Act of 1996 (HIPAA), also known as
the Kennedy-Kassebaum Act
 The Financial Services Modernization Act, or Gramm-Leach-Bliley Act of 1999
Key U.S. Laws of Interest to Information Security Professionals:

ACT | SUBJECT | DATE | DESCRIPTION
Communications Act of 1934, updated by Telecommunications Deregulation & Competition Act | Telecommunications | 1934 | Regulates interstate and foreign telecommunications.
Computer Fraud & Abuse Act | Threats to computers | 1986 | Defines and formalizes laws to counter threats from computer-related acts and offenses.
Computer Security Act of 1987 | Federal agency information security | 1987 | Requires all federal computer systems that contain classified information to have surety plans in place, and requires periodic security training for all individuals who operate, design, or manage such systems.
Economic Espionage Act of 1996 | Trade secrets | 1996 | Designed to prevent abuse of information gained by an individual working in one company and employed by another.
Electronic Communications Privacy Act of 1986 | Cryptography | 1986 | Also referred to as the Federal Wiretapping Act; regulates interception and disclosure of electronic information.
Federal Privacy Act of 1974 | Privacy | 1974 | Governs federal agency use of personal information.
Gramm-Leach-Bliley Act of 1999 | Banking | 1999 | Focuses on facilitating affiliation among banks, insurance, and securities firms; it has significant impact on the privacy of personal information used by these industries.
Health Insurance Portability and Accountability Act | Healthcare privacy | 1996 | Regulates collection, storage, and transmission of sensitive personal healthcare information.
National Information Infrastructure Protection Act of 1996 | Criminal intent | 1996 | Categorizes crimes based on the defendant’s authority to access computers and criminal intent.
Sarbanes-Oxley Act of 2002 | Financial reporting | 2002 | Affects how public organizations and accounting firms deal with corporate governance, financial disclosure, and the practice of public accounting.
Security and Freedom through Encryption Act of 1999 | Use and sale of software that uses or enables encryption | 1999 | Clarifies use of encryption for people in the United States, permits all persons in the U.S. to buy or sell any encryption product, and states that the government cannot require the use of any kind of key escrow system for encryption products.
U.S.A. PATRIOT Act of 2001 | Terrorism | 2001 | Defines stiffer penalties for prosecution of terrorist crimes.

 Export and Espionage Laws

 Economic Espionage Act (EEA) of 1996
 Security and Freedom Through Encryption Act of 1997 (SAFE)

 US Copyright Law

 Intellectual property is recognized as a protected asset in the US
 US copyright law extends this right to the published word, including electronic formats
 Fair use of copyrighted materials includes
 the use to support news reporting, teaching, scholarship, and a number of other related permissions
 the purpose of the use has to be for educational or library purposes, not for profit, and should not be excessive

 Freedom of Information Act of 1966 (FOIA)

 The Freedom of Information Act provides any person with the right to request access to federal agency records or information not determined to be a matter of national security

- US Government agencies are required to disclose any requested information on receipt of a written request

 There are exceptions for information that is protected from disclosure, and the Act does not apply to state or local government agencies or to private businesses or individuals, although many states have their own version of the FOIA

 State & Local Regulations

 In addition to the national and international restrictions placed on an organization in the use of computer technology, each state or locality may have a number of laws and regulations that impact operations

 It is the responsibility of the information security professional to understand state laws and regulations and ensure that the organization's security policies and procedures comply with those laws and regulations

International Laws and Legal Bodies

 Recently the Council of Europe drafted the European Council Cyber-Crime Convention, designed

 to create an international task force to oversee a range of security functions associated with Internet activities,
 to standardize technology laws across international borders

 It also attempts to improve the effectiveness of international investigations into breaches of technology law
 This convention is well received by advocates of intellectual property rights, with its emphasis on copyright infringement prosecution

Digital Millennium Copyright Act (DMCA)

 The Digital Millennium Copyright Act (DMCA) is the US version of an international effort to reduce the impact of copyright, trademark, and privacy infringement
 The European Union Directive 95/46/EC increases protection of individuals with regard to the processing of personal data and limits the free movement of such data
 The United Kingdom has already implemented a version of this directive called the Database Right

United Nations Charter

 To some degree the United Nations Charter provides provisions for information security during Information Warfare
 Information Warfare (IW) involves the use of information technology to conduct offensive operations as part of an organized and lawful military operation by a sovereign state
 IW is a relatively new application of warfare, although the military has been conducting electronic warfare and counter-warfare operations for decades, jamming, intercepting, and spoofing enemy communications

Policy Versus Law

 Most organizations develop and formalize a body of expectations called policy
 Policies function in an organization like laws
 For a policy to become enforceable, it must be:
- Distributed to all individuals who are expected to comply with it
- Readily available for employee reference
- Easily understood, with multi-language translations and translations for visually impaired or literacy-impaired employees
- Acknowledged by the employee, usually by means of a signed consent form
 Only when all conditions are met does the organization have a reasonable expectation of effective policy

Ethical Concepts in Information Security

Cultural Differences in Ethical Concepts

 Differences in cultures cause problems in determining what is ethical and what is not ethical
 Studies of ethical sensitivity to computer use reveal that different nationalities have different perspectives
 Difficulties arise when one nationality's ethical behaviour contradicts that of another national group

Ethics and Education

 Employees must be trained and kept aware of a number of topics related to information security, not the least of which is the expected behavior of an ethical employee

 This is especially important in areas of information security, as many employees may not have the formal technical training to understand that their behavior is unethical or even illegal
 Proper ethical and legal training is vital to creating an informed, well-prepared, and low-risk system user

Deterrence to Unethical and Illegal Behavior

 Deterrence - preventing an illegal or unethical activity
 Laws, policies, and technical controls are all examples of deterrents
 Laws and policies only deter if three conditions are present:
- Fear of penalty
- Probability of being caught
- Probability of penalty being administered
An overview of computer security:
Confidentiality: This term covers two related concepts:
Data confidentiality: Assures that private or confidential information is not made available
or disclosed to unauthorized individuals.
Privacy: Assures that individuals control or influence what information related to them may
be collected and stored and by whom and to whom that information may be disclosed.
■Integrity: This term covers two related concepts: Data integrity: Assures that information
(both stored and in transmitted packets) and programs are changed only in a specified and
authorized manner. System integrity: Assures that a system performs its intended function in
an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the
system.
■Availability: Assures that systems work promptly and service is not denied to authorized
users. These three concepts form what is often referred to as the CIA triad. The three
concepts embody the fundamental security objectives for both data and for information and
computing services. For example, the NIST standard FIPS 199 (Standards for Security
Categorization of Federal Information and Information Systems) lists confidentiality,
integrity, and availability as the three security objectives for information and for information
systems. FIPS 199 provides a useful characterization of these three objectives in terms of
requirements and the definition of a loss of security in each category:
■Confidentiality: Preserving authorized restrictions on information access and disclosure,
including means for protecting personal privacy and proprietary information. A loss of
confidentiality is the unauthorized disclosure of information.
■ Integrity: Guarding against improper information modification or destruction, including
ensuring information nonrepudiation and authenticity. A loss of integrity is the unauthorized
modification or destruction of information.
■Availability: Ensuring timely and reliable access to and use of information. A loss of
availability is the disruption of access to or use of information or an information system.
■Authenticity: The property of being genuine and being able to be verified and trusted;
confidence in the validity of a transmission, a message, or message originator. This means
verifying that users are who they say they are and that each input arriving at the system came
from a trusted source.
■Accountability: The security goal that generates the requirement for actions of an entity to
be traced uniquely to that entity. This supports nonrepudiation, deterrence, fault isolation,
intrusion detection and prevention, and after-action recovery and legal action. Because truly
secure systems are not yet an achievable goal, we must be able to trace a security breach to a
responsible party. Systems must keep records of their activities to permit later forensic
analysis to trace security breaches or to aid in transaction disputes.

Challenges for Computer and Network Security:

Computer and network security is both fascinating and complex. Some of the reasons follow:
1. Security is not as simple as it might first appear to the novice. The requirements seem to be
straightforward; indeed, most of the major requirements for security services can be given
self-explanatory, one-word labels: confidentiality, authentication, nonrepudiation, or
integrity. But the mechanisms used to meet those requirements can be quite complex, and
understanding them may involve rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider
potential attacks on those security features. In many cases, successful attacks are designed by
looking at the problem in a completely different way, therefore exploiting an unexpected
weakness in the mechanism.
3. Because of point 2, the procedures used to provide particular services are often
counterintuitive. Typically, a security mechanism is complex, and it is not obvious from the
statement of a particular requirement that such elaborate measures are needed. It is only when
the various aspects of the threat are considered that elaborate security mechanisms make
sense.
4. Having designed various security mechanisms, it is necessary to decide where to use them.
This is true both in terms of physical placement (e.g., at what points in a network are certain
security mechanisms needed) and in a logical sense (e.g., at what layer or layers of an
architecture such as TCP/IP [Transmission Control Protocol/Internet Protocol] should
mechanisms be placed).
5. Security mechanisms typically involve more than a particular algorithm or protocol. They
also require that participants be in possession of some secret information (e.g., an encryption
key), which raises questions about the creation, distribution, and protection of that secret
information. There also may be a reliance on communications protocols whose behavior may
complicate the task of developing the security mechanism. For example, if the proper
functioning of the security mechanism requires setting time limits on the transit time of a
message from sender to receiver, then any protocol or network that introduces variable,
unpredictable delays may render such time limits meaningless.
6. Computer and network security is essentially a battle of wits between a perpetrator who
tries to find holes and the designer or administrator who tries to close them. The great
advantage that the attacker has is that he or she need only find a single weakness, while the
designer must find and eliminate all weaknesses to achieve perfect security.
7. There is a natural tendency on the part of users and system managers to perceive little
benefit from security investment until a security failure occurs.
8. Security requires regular, even constant, monitoring, and this is difficult in today’s short-
term, overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after the design
is complete rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment to
efficient and user-friendly operation of an information system or use of information.

Access control matrix:

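An access control matrix records, for each subject (row) and each object (column), the set of rights that subject holds over that object, so an access check reduces to a cell lookup. A minimal Python sketch follows; the subjects, objects, and rights are made up for illustration.

# Access control matrix sketch: rows are subjects, columns are objects, and each
# cell holds the set of rights. Subjects, objects, and rights here are illustrative.
acm = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob",   "payroll.db"): {"read"},
    ("bob",   "web_server"): {"execute"},
}

def allowed(subject: str, obj: str, right: str) -> bool:
    # An empty (missing) cell means no rights at all.
    return right in acm.get((subject, obj), set())

print(allowed("alice", "payroll.db", "write"))  # True
print(allowed("bob", "payroll.db", "write"))    # False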
Policy-Security Policies:

A security policy (also called an information security policy or IT security policy) is a


document that spells out the rules, expectations, and overall approach that an organization
uses to maintain the confidentiality, integrity, and availability of its data. Security policies
exist at many different levels, from high-level constructs that describe an enterprise’s general
security goals and principles to documents addressing specific issues, such as remote access
or Wi-Fi use.

Need of Security policies-

1) It increases efficiency.

The best thing about having a policy is being able to increase the level of consistency, which saves time, money, and resources. The policy should inform employees about their individual duties and tell them what they can and cannot do with the organization's sensitive information.

2) It upholds discipline and accountability

When a human mistake occurs and system security is compromised, the organization's security policy will back up any disciplinary action and also support a case in a court of law. The organization's policies act as a contract which proves that the organization has taken steps to protect its intellectual property, as well as its customers and clients.
3) It can make or break a business deal

It is often necessary for companies to provide a copy of their information security policy to other vendors during a business deal that involves the transfer of sensitive information. This is especially true of larger businesses, which ensure their own security interests are protected when dealing with smaller businesses that have less high-end security systems in place.

4) It helps to educate employees on security literacy

A well-written security policy can also be seen as an educational document which informs the readers about the importance of their responsibility in protecting the organization's sensitive data. From choosing the right passwords to providing guidelines for file transfers and data storage, it increases employees' overall awareness of security and how it can be strengthened.

We use security policies to manage our network security. Most types of security policies are automatically created during the installation. We can also customize policies to suit our specific environment. Some important cybersecurity policy recommendations are described below:

1. Virus and Spyware Protection policy

This policy provides the following protection:

o It helps to detect, removes, and repairs the side effects of viruses and security risks by
using signatures.
o It helps to detect the threats in the files which the users try to download by using
reputation data from Download Insight.
o It helps to detect the applications that exhibit suspicious behaviour by using SONAR
heuristics and reputation data.

2. Firewall Policy

This policy provides the following protection:

o It blocks the unauthorized users from accessing the systems and networks that connect
to the Internet.
o It detects the attacks by cybercriminals.
o It removes the unwanted sources of network traffic.

3. Intrusion Prevention policy

This policy automatically detects and blocks the network attacks and browser attacks. It also
protects applications from vulnerabilities. It checks the contents of one or more data packages
and detects malware which is coming through legal ways.
4. LiveUpdate policy

This policy can be categorized into two types: the LiveUpdate Content policy and the LiveUpdate Settings policy. The LiveUpdate policy contains the settings that determine when and how client computers download content updates from LiveUpdate. We can define the computers that clients contact to check for updates and schedule when and how often client computers check for updates.

5. Application and Device Control

This policy protects a system's resources from applications and manages the peripheral
devices that can attach to a system. The device control policy applies to both Windows and
Mac computers whereas application control policy can be applied only to Windows clients.

6. Exceptions policy

This policy provides the ability to exclude applications and processes from detection by the
virus and spyware scans.

7. Host Integrity policy

This policy provides the ability to define, enforce, and restore the security of client computers to keep enterprise networks and data secure. We use this policy to ensure that the client computers that access our network are protected and compliant with the company's security policies. This policy requires that the client system have antivirus software installed.

Four reasons a security policy is important:

1. Guides the implementation of technical controls:

A security policy doesn’t provide specific low-level technical guidance, but it does spell out
the intentions and expectations of senior management in regard to security. It’s then up to the
security or IT teams to translate these intentions into specific technical actions.

For example, a policy might state that only authorized users should be granted access to
proprietary company information. The specific authentication systems and access control
rules used to implement this policy can change over time, but the general intent remains the
same. Without a place to start from, the security or IT teams can only guess senior
management’s desires. This can lead to inconsistent application of security controls across
different groups and business entities.

2. Sets clear expectations:

Without a security policy, each employee or user will be left to his or her own judgment in
deciding what’s appropriate and what’s not. This can lead to disaster when different
employees apply different standards.
Is it appropriate to use a company device for personal use? Can a manager share passwords
with their direct reports for the sake of convenience? What about installing unapproved
software? Without clear policies, different employees might answer these questions in
different ways. A security policy should also clearly spell out how compliance is monitored
and enforced.

3. Helps meet regulatory and compliance requirements:


Documented security policies are a requirement of legislation like HIPAA and Sarbanes-
Oxley, as well as regulations and standards like PCI-DSS, ISO 27001, and SOC2. Even when
not explicitly required, a security policy is often a practical necessity in crafting a strategy to
meet increasingly stringent security and data privacy requirements.

4. Improves organizational efficiency and helps meet business objectives:

A good security policy can enhance an organization’s efficiency. Its policies get everyone on
the same page, avoid duplication of effort, and provide consistency in monitoring and
enforcing compliance. Security policies should also provide clear guidance for when policy
exceptions are granted, and by whom.

To achieve these benefits, in addition to being implemented and followed, the policy will also
need to be aligned with the business goals and culture of the organization.

Confidentiality policies:
Confidentiality is the protection of information in the system so that an unauthorized
person cannot access it. This type of protection is most important in military and
government organizations that need to keep plans and capabilities secret from enemies.
However, it can also be useful to businesses that need to protect their proprietary trade
secrets from competitors or prevent unauthorized persons from accessing the company’s
sensitive information (e.g., legal, personal, or medical information). Privacy issues have
gained an increasing amount of attention in the past few years, placing the importance of
confidentiality on protecting personal information maintained in automated systems by both
government agencies and private-sector organizations. Confidentiality must be well-
defined, and procedures for maintaining confidentiality must be carefully implemented. A
crucial aspect of confidentiality is user identification and authentication. Positive
identification of each system user is essential in order to ensure the effectiveness of policies
that specify who is allowed access to which data items.
Threats to Confidentiality: Confidentiality can be compromised in several ways. The
following are some of the commonly encountered threats to information confidentiality –
 Hackers
 Masqueraders
 Unauthorized user activity
 Unprotected downloaded files
 Local area networks (LANs)
 Trojan Horses
Confidentiality Models: Confidentiality models are used to describe what actions must be
taken to ensure the confidentiality of information. These models can specify how security
tools are used to achieve the desired level of confidentiality. The most commonly used
model for describing the enforcement of confidentiality is the Bell-LaPadula model.
 In this model, the relationship between objects (i.e., the files, records, programs, and equipment that contain or receive information) and subjects (i.e., the persons, processes, or devices that cause the information to flow between the objects) is defined.
 The relationships are described in terms of the subject's assigned level of access or privilege and the object's level of sensitivity. In military terms, these would be described as the security clearance of the subject and the security classification of the object.
Another type of model that is commonly used is Access control model.
 It organizes the system into objects (i.e, resources being acted on), subjects (i.e, the
person or program doing the action), and operations (i.e, the process of interaction).
 A set of rules specifies which operation can be performed on an object by which subject.
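A minimal Python sketch of the Bell-LaPadula relationship described above: a subject may read an object only if the subject's clearance dominates the object's classification. The level names and numeric ordering are illustrative, not part of the model's definition.

# Bell-LaPadula "simple security" check: read allowed only when the subject's
# clearance is at least the object's classification ("no read up").
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(can_read("secret", "confidential"))  # True: clearance dominates classification
print(can_read("confidential", "secret"))  # False: reading up is denied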
Types of Confidentiality :
In Information Security, there are several types of confidentiality:
1. Data confidentiality: refers to the protection of data stored in computer systems and
networks from unauthorized access, use, disclosure, or modification. This is achieved
through various methods, such as encryption and access controls.
2. Network confidentiality: refers to the protection of information transmitted over
computer networks from unauthorized access, interception, or tampering. This is
achieved through encryption and secure protocols such as SSL/TLS.
3. End-to-end confidentiality: refers to the protection of information transmitted between
two endpoints, such as between a client and a server, from unauthorized access or
tampering. This is achieved through encryption and secure protocols.
4. Application confidentiality: refers to the protection of sensitive information processed
and stored by software applications from unauthorized access, use, or modification. This
is achieved through user authentication, access controls, and encryption of data stored in
the application.
5. Disk and file confidentiality: refers to the protection of data stored on physical storage
devices, such as hard drives, from unauthorized access or theft. This is achieved through
encryption, secure storage facilities, and access controls.
Overall, the goal of confidentiality in Information Security is to protect sensitive and
private information from unauthorized access, use, or modification and to ensure that only
authorized individuals have access to confidential information.
Uses of Confidentiality :
In the field of information security, confidentiality is used to protect sensitive data and
information from unauthorized access and disclosure. Some common uses include:
1. Encryption: Encrypting sensitive data helps to protect it from unauthorized access and
disclosure.
2. Access control: Confidentiality can be maintained by controlling who has access to
sensitive information and limiting access to only those who need it.
3. Data masking: Data masking is a technique used to obscure sensitive information, such
as credit card numbers or social security numbers, to prevent unauthorized access.
4. Virtual private networks (VPNs): VPNs allow users to securely connect to a network
over the internet and protect the confidentiality of their data in transit.
5. Secure file transfer protocols (SFTPs): SFTPs are used to transfer sensitive data
securely over the internet, protecting its confidentiality in transit.
6. Two-factor authentication: Two-factor authentication helps to ensure that only
authorized users have access to sensitive information by requiring a second form of
authentication, such as a fingerprint or a one-time code.
7. Data loss prevention (DLP): DLP is a security measure used to prevent sensitive data
from being leaked or lost. It monitors and controls the flow of sensitive data, protecting
its confidentiality.
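As a small illustration of item 1 (encryption for confidentiality), the sketch below uses the third-party Python cryptography package (assumed to be installed, e.g. via pip install cryptography); the plaintext is made up.

# Symmetric encryption for confidentiality using Fernet (AES-based authenticated encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the key itself must be kept under access control
f = Fernet(key)
token = f.encrypt(b"patient record 1234: diagnosis ...")
print(token)                         # ciphertext is unreadable without the key
print(f.decrypt(token))              # only a key holder can recover the plaintext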
Issues of Confidentiality :
Confidentiality in information security can be challenging to maintain, and there are several
issues that can arise, including:
1. Insider threats: Employees and contractors who have access to sensitive information
can pose a threat to confidentiality if they intentionally or accidentally disclose it.
2. Cyberattacks: Hackers and cybercriminals can exploit vulnerabilities in systems and
networks to access and steal confidential information.
3. Social engineering: Social engineers use tactics like phishing and pretexting to trick
individuals into revealing sensitive information, compromising its confidentiality.
4. Human error: Confidential information can be accidentally disclosed through human
error, such as sending an email to the wrong recipient or leaving sensitive information in
plain sight.
5. Technical failures: Technical failures, such as hardware failures or data breaches, can
result in the loss or exposure of confidential information.
6. Inadequate security measures: Inadequate security measures, such as weak passwords
or outdated encryption algorithms, can make it easier for unauthorized parties to access
confidential information.
7. Legal and regulatory compliance: Confidentiality can be impacted by legal and
regulatory requirements, such as data protection laws, that may require the disclosure of
sensitive information in certain circumstances.

Integrity policies:
Integrity is the protection of system data from intentional or accidental unauthorized
changes. The challenges of the security program are to ensure that data is maintained in the
state that is expected by the users. Although the security program cannot improve the
accuracy of the data that is put into the system by users. It can help ensure that any changes
are intended and correctly applied. An additional element of integrity is the need to protect
the process or program used to manipulate the data from unauthorized modification. A
critical requirement of both commercial and government data processing is to ensure the
integrity of data to prevent fraud and errors. It is imperative, therefore, no user be able to
modify data in a way that might corrupt or lose assets or financial records or render
decision making information unreliable. Examples of government systems in which
integrity is crucial include air traffic control system, military fire control systems, social
security and welfare systems. Examples of commercial systems that require a high level of
integrity include medical prescription system, credit reporting systems, production control
systems and payroll systems.
Protecting against Threats to Integrity: Like confidentiality, integrity can also be
arbitrated by hackers, masqueraders, unprotected downloaded files, LANs, unauthorized
user activities, and unauthorized programs like Trojan Horse and viruses, because each of
these threads can lead to unauthorized changes to data or programs. For example,
unauthorized user can corrupt or change data and programs intentionally or accidentally if
their activities on the system are not properly controlled. Generally, three basic principles
are used to establish integrity controls:
1. Need-to-know access: User should be granted access only into those files and programs
that they need in order to perform their assigned jobs functions.
2. Separation of duties: To ensure that no single employee has control of a transaction
from beginning to end, two or more people should be responsible for performing it.
3. Rotation of duties: Job assignment should be changed periodically so that it becomes
more difficult for the users to collaborate to exercise complete control of a transaction
and subvert it for fraudulent purposes.
Integrity Models – Integrity models are used to describe what needs to be done to enforce
the information integrity policy. There are three goals of integrity, which the models
address in various ways:
1. Preventing unauthorized users from making modifications to data or programs.
2. Preventing authorized users from making improper or unauthorized modifications.
3. Maintaining internal and external consistency of data and programs.

Hybrid policies:

Chinese Wall Model. Security policy that refers equally to confidentiality and integrity.
Describes policies that involve conflict of interest in business.

Def: The objects of the database are items of information related to a company.

Def: A Company Dataset (CD) contains objects related to a single company

Def: A Conflict Of Interest (COI) class contains the datasets of companies in competition

CW-Simple Security Condition:

S can read O if and only if any of the following holds:

1. There is an object O' such that S has accessed O' and CD(O') = CD(O).

2. For all objects O' in PR(S), COI(O') ≠ COI(O), where PR(S) is the set of objects previously read by S.

3. O is a sanitized object.

Effects on subjects:
a. Once a subject reads any object in a COI class, the only other objects in that COI class that the subject can read belong to the same company dataset; i.e., once one company's objects have been read, objects of competing companies in that class can no longer be read.

b. The minimum number of subjects needed to access every object in a COI class equals the number of company datasets in that class.

CW-*-Property:

A subject S may write to an object O if and only if both of the following conditions hold:

1. The CW-simple security condition permits S to read O.

2. For all unsanitized objects O', if S can read O', then CD(O') = CD(O).

This prevents a subject from copying sensitive information read from one company's dataset into an object belonging to a different company's dataset.
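A minimal Python sketch of the CW-simple security condition: a subject may read from a company dataset only if it has not already read from a competing dataset in the same conflict-of-interest class. The company names and COI classes are made up.

# Chinese Wall read check: deny reads that would create a conflict of interest.
COI = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}   # company dataset -> COI class

class Subject:
    def __init__(self):
        self.read_datasets = set()          # PR(S), tracked per company dataset

    def can_read(self, dataset: str) -> bool:
        for prior in self.read_datasets:
            if COI[prior] == COI[dataset] and prior != dataset:
                return False                # competing dataset already read: deny
        return True

    def read(self, dataset: str) -> bool:
        if self.can_read(dataset):
            self.read_datasets.add(dataset)
            return True
        return False

s = Subject()
print(s.read("BankA"))   # True
print(s.read("BankB"))   # False: same COI class as BankA
print(s.read("OilCo"))   # True: different COI class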

Clinical Information Systems Security Policy

Electronic medical records present their own requirements for policies that combine confidentiality and integrity.

Patient confidentiality, authentication of both records and those making entries in those
records, and assurance that the records have not been changed erroneously are most critical.
Def: A patient is the subject of medical records, or an agent for that person who can give
consent for the person to be treated.

Def: Personal health information (electronic medical record) is information about a patient’s
health or treatment enabling that patient to be identified.

Def: A clinician is a health-care professional who has access to personal health information
while performing his or her job.

Guided by the Clark-Wilson model, we have a set of principles that address electronic
medical records. Access to the electronic medical records must be restricted to the clinician
and the clinician’s practice group.

Access Principle 1: Each medical record has an access control list naming the individuals or
groups who may read and append information to the record. The system must restrict access
to those identified on the access control list. Medical ethics require that only clinicians and
the patient have access to the patient’s electronic medical records.

Access Principle 2: One of the clinicians on the access control list (called the responsible
clinician) must have the right to add other clinicians to the access control list. The patient
must consent to any treatment. Hence, the patient has the right to know when his or her
electronic medical records are accessed or altered. Also the electronic medical records system
must prevent the leakage of information. Hence, the patient must be notified when their
electronic medical records are accessed by a clinician that the patient does not know.

Access Principle 3: The responsible clinician must notify the patient of the names on the
access control list whenever the patient’s medical record is opened. Except in situations given
in statutes or in cases of emergency, the responsible clinician must obtain the patient’s
consent. Auditing who accesses the patient’s electronic medical records, when those records
were accessed, and what changes, if any, were made to the electronic medical records must
be recorded to adhere to numerous government medical information requirements.

Access Principle 4: The name of the clinician, the date, and the time of the access of a
medical record must be recorded. Similar information must be kept for deletions. The
following principles deal with record creation, and information deletion. New electronic
medical records should allow the attending clinician and the patient access to those records.
Additionally, the referring clinician, if any, should have access to those records to see the
results of any referral.

Creation Principle: A clinician may open a record, with the clinician and the patient on the
access control list. If the record is opened as a result of a referral, the referring clinician may
also be on the access control list. Electronic medical records should be kept the required
amount of time, normally 8 years except in some instances.

Deletion Principle: Clinical information cannot be deleted from a medical record until the
appropriate time has passed. When copying electronic medical records, care must be taken to
prevent the unauthorized disclosure of a patient’s medical records.

Confinement Principle: Information from one medical record may be appended to a


different medical record if and only if the access control list of the second record is a subset
of the access control list of the first. The combining of information from numerous authorized
sources may lead to new information that the clinician should not have access to. Also the
access to a wide set of medical records would make the individual clinician susceptible to
corruption or blackmail.

Aggregation Principle: Measures for preventing the aggregation of patient data must be
effective. In particular a patient must be notified if anyone is to be added to the access control
list for the patient’s record and if that person has access to a large number of medical records.
There must be system mechanisms implemented to enforce all of these principles.

Enforcement Principle: Any computer system that handles medical records must have a
subsystem that enforces the preceding principles. The effectiveness of this enforcement must
be subject to evaluation by independent auditors.
UNIT III DIGITAL SIGNATURE AND AUTHENTICATION

Digital Signature and Authentication Schemes: Digital signature - Digital Signature Schemes and their Variants - Digital Signature Standards - Authentication: Overview - Requirements - Protocols - Applications - Kerberos - X.509 Directory Services

DIGITAL SIGNATURE:

A digital signature is a mathematical technique used to validate the authenticity and integrity
of a digital document, message or software. It's the digital equivalent of a handwritten
signature or stamped seal, but it offers far more inherent security. A digital signature is
intended to solve the problem of tampering and impersonation in digital communications.

Digital signatures can provide evidence of origin, identity and status of electronic documents,
transactions or digital messages. Signers can also use them to acknowledge informed consent.
In many countries, including the U.S., digital signatures are considered legally binding in the
same way as traditional handwritten document signatures.

How do digital signatures work?

Digital signatures are based on public key cryptography, also known as asymmetric
cryptography. Using a public key algorithm -- such as Rivest-Shamir-Adleman, or RSA --
two keys are generated, creating a mathematically linked pair of keys: one private and one
public.

Digital signatures work through public key cryptography's two mutually authenticating
cryptographic keys. For encryption and decryption, the person who creates the digital
signature uses a private key to encrypt signature-related data. The only way to decrypt that
data is with the signer's public key.

If the recipient can't open the document with the signer's public key, that indicates there's a
problem with the document or the signature. This is how digital signatures are authenticated.

Digital certificates, also called public key certificates, are used to verify that the public key
belongs to the issuer. Digital certificates contain the public key, information about its owner,
expiration dates and the digital signature of the certificate's issuer. Digital certificates are
issued by trusted third-party certificate authorities (CAs), such as Docu Sign or Global Sign,
for example. The party sending the document and the person signing it must agree to use a
given CA.

Digital signature technology requires all parties trust that the person who creates the signature
image has kept the private key secret. If someone else has access to the private signing key,
that party could create fraudulent digital signatures in the name of the private key holder.

What are the benefits of digital signatures?

Digital signatures offer the following benefits:

 Security. Security capabilities are embedded in digital signatures to ensure a legal


document isn't altered and signatures are legitimate. Security features include asymmetric
cryptography, personal identification numbers (PINs), checksums and cyclic redundancy
checks (CRCs), as well as CA and trust service provider (TSP) validation.

 Timestamping. This provides the date and time of a digital signature and is useful when
timing is critical, such as for stock trades, lottery ticket issuance and legal proceedings.

 Globally accepted and legally compliant. The public key infrastructure (PKI) standard
ensures vendor-generated keys are made and stored securely. With digital signatures
becoming an international standard, more countries are accepting them as legally binding.

 Time savings. Digital signatures simplify the time-consuming processes of physical


document signing, storage and exchange, enabling businesses to quickly access and sign
documents.

 Cost savings. Organizations can go paperless and save money previously spent on the
physical resources, time, personnel and office space used to manage and transport
documents.

 Positive environmental effects. Reducing paper use also cuts down on the physical waste
generated by paper and the negative environmental impact of transporting paper
documents.
 Traceability. Digital signatures create an audit trail that makes internal record-keeping
easier for businesses. With everything recorded and stored digitally, there are fewer
opportunities for a manual signee or record-keeper to make a mistake or misplace
something.
How do you create a digital signature?

To create a digital signature, signing software -- such as an email program -- is used to


provide a one-way hash of the electronic data to be signed.

A hash is a fixed-length string of letters and numbers generated by an algorithm. The digital
signature creator's private key is used to encrypt the hash. The encrypted hash -- along with
other information, such as the hashing algorithm -- is the digital signature.

The reason for encrypting the hash instead of the entire message or document is because a
hash function can convert an arbitrary input into a fixed-length value, which is usually much
shorter. This saves time, as hashing is much faster than signing.

The value of a hash is unique to the hashed data. Any change in the data -- even a
modification to a single character -- results in a different value. This attribute enables others
to use the signer's public key to decrypt the hash to validate the integrity of the data.

If the decrypted hash matches a second computed hash of the same data, it proves that the
data hasn't changed since it was signed. But, if the two hashes don't match, the data has either
been tampered with in some way and is compromised or the signature was created with a
private key that doesn't correspond to the public key presented by the signer. This signals an
issue with authentication.
 The first step is to prepare the message or file you want to send. The signing software computes a hash of the file and encrypts that hash with your private key; this encrypted hash is the digital signature attached to the file. You then send the signed file to the ABC Office over the internet.
 In the second step, the ABC Office receives your file and decrypts the signature using your public key, recovering the hash value you computed.
 In the final step, the ABC Office computes its own hash of the received file and compares it with the decrypted hash. If the two match, the file is authentic and unaltered; if they do not match, the file has been tampered with or the signature does not belong to you.
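The hash-then-sign flow above can be sketched with a toy textbook RSA key pair. The primes, exponent, and messages below are made up for illustration; real signatures use 2048-bit keys and a padding scheme such as PSS.

# "Hash, then encrypt the hash with the private key" with a toy RSA key pair.
import hashlib

p_, q_ = 61, 53
n = p_ * q_                           # toy modulus, 3233
e = 17                                # public exponent
d = pow(e, -1, (p_ - 1) * (q_ - 1))   # private exponent, 2753 (Python 3.8+)

def sign(message: bytes) -> int:
    # Fixed-length SHA-256 digest, reduced mod n so it fits the toy modulus.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)               # "encrypt" the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h  # recover the hash with the public key

sig = sign(b"contract text")
print(verify(b"contract text", sig))            # True
print(verify(b"tampered contract text", sig))   # False: hash no longer matches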

A digital signature can be used with any kind of message, whether or not it's encrypted,
simply so the receiver can be sure of the sender's identity and that the message arrived intact.
Digital signatures make it difficult for the signer to deny having signed something, as the
digital signature is unique to both the document and the signer and it binds them together.
This property is called nonrepudiation.

The digital certificate is the electronic document that contains the digital signature of the
issuing CA. It's what binds together a public key with an identity and can be used to verify
that a public key belongs to a particular person or entity. Most modern email programs
support the use of digital signatures and digital certificates, making it easy to sign any
outgoing emails and validate digitally signed incoming messages.

Digital signatures are also used extensively to provide proof of authenticity, data integrity and
nonrepudiation of communications and transactions conducted over the internet.
Classes and types of digital signatures

There are three different classes of digital signature certificates (DSCs) as follows:

 Class 1. This type of DSC can't be used for legal business documents, as they're validated
based only on an email ID and username. Class 1 signatures provide a basic level of
security and are used in environments with a low risk of data compromise.

 Class 2. These DSCs are often used for electronic filing (e-filing) of tax documents,
including income tax returns and goods and services tax returns. Class 2 digital signatures
authenticate a signer's identity against a pre-verified database. Class 2 digital signatures
are used in environments where the risks and consequences of data compromise are
moderate.

 Class 3. The highest level of digital signatures, Class 3 signatures require people or
organizations to present in front of a CA to prove their identity before signing. Class 3
digital signatures are used for e-auctions, e-tendering, e-ticketing and court filings, as well
as in other environments where threats to data or the consequences of a security failure are
high.
Uses for digital signatures

Digital signature tools and services are commonly used in contract-heavy industries,
including the following:

 Government. The U.S. Government Publishing Office publishes electronic versions of


budgets, public and private laws, and congressional bills with digital signatures.
Governments worldwide use digital signatures for processing tax returns, verifying
business-to-government transactions, ratifying laws and managing contracts. Most
government entities must adhere to strict laws, regulations and standards when using
digital signatures. Many governments and corporations also use smart cards to identify
their citizens and employees. These are physical cards with an embedded chip that
contains a digital signature that provides the cardholder access to an institution's systems
or physical buildings.

 Healthcare. Digital signatures are used in the healthcare industry to improve the
efficiency of treatment and administrative processes, strengthen data security, e-prescribe
and process hospital admissions. The use of digital signatures in healthcare must comply
with the Health Insurance Portability and Accountability Act of 1996.
 Manufacturing. Manufacturing companies use digital signatures to speed up processes,
including product design, quality assurance, manufacturing enhancements, marketing and
sales. The use of digital signatures in manufacturing is governed by the International
Organization for Standardization and the National Institute of Standards and
Technology Digital Manufacturing Certificate.

 Financial services. The U.S. financial sector uses digital signatures for contracts,
paperless banking, loan processing, insurance documentation and mortgages. This heavily
regulated sector uses digital signatures, paying careful attention to the regulations and
guidance put forth by the Electronic Signatures in Global and National Commerce Act (E-
Sign Act), state Uniform Electronic Transactions Act regulations, the Consumer Financial
Protection Bureau and the Federal Financial Institutions Examination Council.

 Cryptocurrencies. Bitcoin and other cryptocurrencies use digital signatures to


authenticate the blockchain. They're also used to manage transaction data associated with
cryptocurrency and as a way for users to show ownership of currency or their participation
in a transaction.

 Non-fungible tokens (NFTs). Digital signatures are used with digital assets -- such as
artwork, music and videos -- to secure and trace these types of NFTs anywhere on the
blockchain.

Digital signature security:

Security is the main benefit of using digital signatures. Security features and methods used in
digital signatures include the following:

 PINs, passwords and codes. These are used to authenticate and verify a signer's identity
and approve their signature. Email, username and password are the most common methods
used.

 Asymmetric cryptography. This employs a public key algorithm that includes private
and public key encryption and authentication.

 Checksum. This long string of letters and numbers is used to determine the authenticity of
transmitted data. A checksum is the result of running a cryptographic hash function on a
piece of data. The value of the original checksum file is compared against the checksum
value of the calculated file to detect errors or changes. A checksum acts like a data
fingerprint.

 CRC. A type of checksum, this error-detecting code and verification feature is used in
digital networks and storage devices to detect changes to raw data.

 CA validation. CAs issue digital signatures and act as trusted third parties by accepting,
authenticating, issuing and maintaining digital certificates. The use of CAs helps avoid the
creation of fake digital certificates.

 TSP validation. This person or legal entity validates a digital signature on a company's
behalf and offers signature validation reports.
Digital signature attacks

Possible attacks on digital signatures include the following:

 Chosen-message attack. The attacker either obtains the victim's public key or tricks the
victim into digitally signing a document they don't intend to sign.

 Known-message attack. The attacker obtains messages the victim sent and a key that
enables the attacker to forge the victim's signature on documents.

 Key-only attack. The attacker only has access to the victim's public key and can re-create
the victim's signature to digitally sign documents or messages that the victim doesn't
intend to sign.
Digital signature tools and vendors

There are numerous e-signature tools and technologies on the market, including the
following:

 Adobe Acrobat Sign is a cloud-based service that's designed to provide secure, legal e-
signatures across all device types. Adobe Acrobat Sign integrates with existing
applications, including Microsoft Office and Dropbox.

 DocuSign standards-based services ensure e-signatures are compliant with existing


regulations. Services include Express Signature for basic global transactions and EU
Qualified Signature, which complies with EU standards.

 Dropbox Sign helps users prepare, send, sign and track documents. Features of the tool
include embedded signing, custom branding and embedded templates. Dropbox Sign also
integrates with applications such as Microsoft Word, Slack and Box.
 GlobalSign provides a host of management, integration and automation tools to
implement PKI across enterprise environments.

 PandaDoc provides e-signature software that helps users upload, send and collect
payments for documents. Users can also track document status and receive notifications
when someone opens, views, comments on or signs a document.

 ReadySign from Onit provides users with customizable templates and forms for e-
signatures. Software features include bulk sending, notifications, reminders, custom
signatures and document management with role-based permissions.

 Signeasy offers an e-signing service of the same name to businesses and individuals, as
well as application programming interfaces for developers.

 SignNow, which is part of AirSlate Business Cloud, provides businesses with a PDF
signing tool.

Digital signature schemes:

Elgamal digital signature scheme:


Before examining the NIST Digital Signature Algorithm, it will be helpful to understand
the Elgamal and Schnorr signature schemes. Recall from Chapter 10, that the
Elgamal encryption scheme is designed to enable encryption by a user’s public key
with decryption by the user’s private key. The Elgamal signature scheme involves
the use of the private key for digital signature generation and the public key for
digital signature verification [ELGA84, ELGA85].
Before proceeding, we need a result from number theory. Recall from Chapter 2 that for a prime number q, if a is a primitive root of q, then
a, a^2, ..., a^(q-1)
are distinct (mod q). It can be shown that, if a is a primitive root of q, then
1. For any integer m, a^m ≡ 1 (mod q) if and only if m ≡ 0 (mod q - 1).
2. For any integers i, j, a^i ≡ a^j (mod q) if and only if i ≡ j (mod q - 1).
As with Elgamal encryption, the global elements of the Elgamal digital signature are a prime number q and a, which is a primitive root of q. User A generates a private/public key pair as follows.
1. Generate a random integer XA, such that 1 < XA < q - 1.
2. Compute YA = a^XA mod q.
3. A's private key is XA; A's public key is {q, a, YA}.
To sign a message M, user A first computes the hash m = H(M), such that m is an integer in the range 0 ≤ m ≤ q - 1. A then forms a digital signature as follows.
1. Choose a random integer K such that 1 ≤ K ≤ q - 1 and gcd(K, q - 1) = 1. That is, K is relatively prime to q - 1.
2. Compute S1 = a^K mod q. Note that this is the same as the computation of C1 for Elgamal encryption.
3. Compute K^-1 mod (q - 1). That is, compute the inverse of K modulo q - 1.
4. Compute S2 = K^-1 (m - XA S1) mod (q - 1).
5. The signature consists of the pair (S1, S2).

Any user B can verify the signature as follows.

1. Compute V1 = a^m mod q.
2. Compute V2 = (YA)^S1 (S1)^S2 mod q.
The signature is valid if V1 = V2. Let us demonstrate that this is so. Assume that the equality is true. Then we have
a^m mod q = (YA)^S1 (S1)^S2 mod q            [assume V1 = V2]
a^m mod q = a^(XA S1) a^(K S2) mod q          [substituting for YA and S1]
a^(m - XA S1) mod q = a^(K S2) mod q          [rearranging terms]
m - XA S1 ≡ K S2 mod (q - 1)                  [property of primitive roots]
m - XA S1 ≡ K K^-1 (m - XA S1) mod (q - 1)    [substituting for S2]
For example, let us start with the prime field GF(19); that is, q = 19. It has primitive roots {2, 3, 10, 13, 14, 15}, as shown in Table 2.7. We choose a = 10.
Alice generates a key pair as follows:
1. Alice chooses XA = 16.
2. Then YA = a^XA mod q = 10^16 mod 19 = 4.
3. Alice's private key is 16; Alice's public key is {q, a, YA} = {19, 10, 4}.
Suppose Alice wants to sign a message with hash value m = 14.
1. Alice chooses K = 5, which is relatively prime to q - 1 = 18.
2. S1 = a^K mod q = 10^5 mod 19 = 3 (see Table 2.7).
3. K^-1 mod (q - 1) = 5^-1 mod 18 = 11.
4. S2 = K^-1 (m - XA S1) mod (q - 1) = 11 (14 - (16)(3)) mod 18 = -374 mod 18 = 4.
Bob can verify the signature as follows.
1. V1 = a^m mod q = 10^14 mod 19 = 16.
2. V2 = (YA)^S1 (S1)^S2 mod q = (4^3)(3^4) mod 19 = 5184 mod 19 = 16.
Thus, the signature is valid because V1 = V2.
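A short Python sketch (requires Python 3.8+ for pow(k, -1, m)) that reproduces the worked Elgamal example above with q = 19, a = 10, XA = 16, K = 5, and m = 14; real deployments use large primes and secure randomness.

# Toy Elgamal signature, reproducing the worked example above.
from math import gcd

q, a = 19, 10            # public: prime modulus and primitive root
x_A = 16                 # Alice's private key
y_A = pow(a, x_A, q)     # Alice's public key -> 4

def sign(m, k):
    """Sign a hash value m with per-message secret k (gcd(k, q-1) must be 1)."""
    assert 1 <= k <= q - 1 and gcd(k, q - 1) == 1
    s1 = pow(a, k, q)                        # 10^5 mod 19 = 3
    k_inv = pow(k, -1, q - 1)                # 5^-1 mod 18 = 11
    s2 = (k_inv * (m - x_A * s1)) % (q - 1)  # 11*(14 - 48) mod 18 = 4
    return s1, s2

def verify(m, s1, s2):
    v1 = pow(a, m, q)                               # 10^14 mod 19 = 16
    v2 = (pow(y_A, s1, q) * pow(s1, s2, q)) % q     # 4^3 * 3^4 mod 19 = 16
    return v1 == v2

sig = sign(14, 5)
print(sig, verify(14, *sig))   # (3, 4) True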

schnorr digital signature scheme:

As with the Elgamal digital signature scheme, the Schnorr signature scheme is
based on discrete logarithms [SCHN89, SCHN91]. The Schnorr scheme minimizes
the message-dependent amount of computation required to generate a signature.
The main work for signature generation does not depend on the message and can
be done during the idle time of the processor. The message-dependent part of the
signature generation requires multiplying a 2n-bit integer with an n-bit integer.
The scheme is based on using a prime modulus p, with p - 1 having a prime factor q of appropriate size; that is, p - 1 ≡ 0 (mod q). Typically, we use p ≈ 2^1024 and q ≈ 2^160. Thus, p is a 1024-bit number, and q is a 160-bit number, which is also the length of the SHA-1 hash value.

The first part of this scheme is the generation of a private/public key pair, which consists of the following steps.
1. Choose primes p and q, such that q is a prime factor of p - 1.
2. Choose an integer a, such that a^q = 1 mod p. The values a, p, and q comprise a global public key that can be common to a group of users.
3. Choose a random integer s with 0 < s < q. This is the user's private key.
4. Calculate v = a^-s mod p. This is the user's public key.
A user with private key s and public key v generates a signature as follows.
1. Choose a random integer r with 0 < r < q and compute x = a^r mod p. This computation is a preprocessing stage independent of the message M to be signed.
2. Concatenate the message with x and hash the result to compute the value e:
e = H(M || x)
3. Compute y = (r + se) mod q. The signature consists of the pair (e, y).
Any other user can verify the signature as follows.
1. Compute x′ = a^y v^e mod p.
2. Verify that e = H(M || x′).
To see that the verification works, observe that
x′ ≡ a^y v^e ≡ a^y a^(-se) ≡ a^(y - se) ≡ a^r ≡ x (mod p)
Hence, H(M || x′) = H(M || x).
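A minimal Python sketch of the Schnorr scheme with toy parameters p = 23, q = 11, and a = 4 (here a = 2^((p-1)/q) mod p, so a has order q); real deployments use the 1024-bit/160-bit sizes described above.

# Toy Schnorr signature: key generation, signing, and verification.
import hashlib

p, q, a = 23, 11, 4

def H(msg: bytes, x: int) -> int:
    # e = H(M || x), reduced mod q for the toy parameters.
    return int.from_bytes(hashlib.sha256(msg + str(x).encode()).digest(), "big") % q

s = 7                          # private key, 0 < s < q
v = pow(a, (q - s) % q, p)     # public key v = a^(-s) mod p (valid because a has order q)

def sign(msg: bytes, r: int):
    x = pow(a, r, p)           # preprocessing, independent of the message
    e = H(msg, x)
    y = (r + s * e) % q
    return e, y

def verify(msg: bytes, e: int, y: int) -> bool:
    x_prime = (pow(a, y, p) * pow(v, e, p)) % p   # a^y * v^e = a^r mod p
    return e == H(msg, x_prime)

sig = sign(b"hello", r=3)
print(verify(b"hello", *sig))  # True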

NIST Digital Signature Algorithm:


The National Institute of Standards and Technology (NIST) has published Federal Information Processing Standard FIPS 186, known as the Digital Signature Algorithm (DSA). The DSA makes use of the Secure Hash Algorithm (SHA) described in Chapter 12. The DSA was originally proposed in 1991 and revised in 1993 in response to public feedback concerning the security of the scheme. There was a further minor revision in 1996. In 2000, an expanded version of the standard was issued as FIPS 186-2, subsequently updated to FIPS 186-3 in 2009, and FIPS 186-4 in 2013. This latest version also incorporates digital signature algorithms based on RSA and on elliptic curve cryptography. In this section, we discuss DSA.

The DSA Approach:

The DSA uses an algorithm that is designed to provide only the digital signature function. Unlike RSA, it cannot be used for encryption or key exchange. Nevertheless, it is a public-key technique.

The DSA approach for generating digital signatures can be contrasted with that used with RSA. In the RSA approach, the message to be signed is input to a
hash function that produces a secure hash code of fixed length. This hash code is
then encrypted using the sender’s private key to form the signature. Both the message
and the signature are then transmitted. The recipient takes the message and
produces a hash code. The recipient also decrypts the signature using the sender’s
public key. If the calculated hash code matches the decrypted signature, the signature
is accepted as valid. Because only the sender knows the private key, only the
sender could have produced a valid signature.
The DSA approach also makes use of a hash function. The hash code is provided
as input to a signature function along with a random number k generated for
this particular signature. The signature function also depends on the sender’s private
key (PRa) and a set of parameters known to a group of communicating principals.
We can consider this set to constitute a global public key (PUG). The result is a
signature consisting of two components, labeled s and r.
At the receiving end, the hash code of the incoming message is generated. The
hash code and the signature are inputs to a verification function. The verification
function also depends on the global public key as well as the sender’s public key
(PUa), which is paired with the sender’s private key. The output of the verification
function is a value that is equal to the signature component r if the signature is valid.
The signature function is such that only the sender, with knowledge of the private
key, could have produced the valid signature.
We turn now to the details of the algorithm.

The Digital Signature Algorithm:
DSA is based on the difficulty of computing discrete logarithms (see Chapter 2)
and is based on schemes originally presented by Elgamal [ELGA85] and Schnorr
[SCHN91].
Figure 13.3 summarizes the algorithm. There are three parameters that are
public and can be common to a group of users. An N-bit prime number q is chosen.
Next, a prime number p is selected with a length between 512 and 1024 bits such
that q divides (p - 1). Finally, g is chosen to be of the form h^((p-1)/q) mod p, where h
is an integer between 1 and (p - 1) with the restriction that g must be greater
than 1. Thus, the global public-key components of DSA are the same as in the
Schnorr signature scheme.
With these parameters in hand, each user selects a private key and generates
a public key. The private key x must be a number from 1 to (q - 1) and should
be chosen randomly or pseudorandomly. The public key is calculated from the
private key as y = g^x mod p. The calculation of y given x is relatively straightforward.
However, given the public key y, it is believed to be computationally
infeasible to determine x, which is the discrete logarithm of y to the base g, mod p.
The signature of a message M consists of the pair of numbers r and s, which are
functions of the public key components (p, q, g), the user’s private key (x), the hash
code of the message H(M), and an additional integer k that should be generated
randomly or pseudorandomly and be unique for each signing.
Let M, r′, and s′ be the received versions of M, r, and s, respectively.
Verification is performed using the formulas shown in Figure 13.3. The receiver
generates a quantity v that is a function of the public key components, the sender’s
public key, the hash code of the incoming message, and the received versions of r
and s. If this quantity matches the r component of the signature, then the signature
is validated.
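A minimal Python sketch of the signing and verification equations summarized above, again with insecure toy parameters and SHA-256 standing in for the approved SHA hash; it is only meant to make the algebra concrete.

# Minimal sketch of DSA signing and verification (Python 3.8+); NOT secure parameters.
import hashlib
import secrets

p, q = 607, 101               # q divides p - 1 (606 = 6 * 101)
g = pow(2, (p - 1) // q, p)   # g = h^((p-1)/q) mod p

def Hq(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key, 1 <= x <= q - 1
    y = pow(g, x, p)                   # public key y = g^x mod p
    return x, y

def sign(msg, x):
    while True:
        k = secrets.randbelow(q - 1) + 1              # fresh per-message secret
        r = pow(g, k, p) % q                          # r depends only on k and globals
        s = (pow(k, -1, q) * (Hq(msg) + x * r)) % q
        if r != 0 and s != 0:
            return r, s

def verify(msg, sig, y):
    r, s = sig
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = (Hq(msg) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r                                     # test is on r, as noted in the text

x, y = keygen()
print(verify(b"message", sign(b"message", x), y))     # True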
The structure of the algorithm, as revealed in Figure 13.4, is quite interesting.
Note that the test at the end is on the value r, which does not depend on the message
at all. Instead, r is a function of k and the three global public-key components.
The multiplicative inverse of k (mod q) is passed to a function that also has as inputs
the message hash code and the user’s private key. The structure of this function is
such that the receiver can recover r using the incoming message and signature, the
public key of the user, and the global public key. It is certainly not obvious from
Figure 13.3 or Figure 13.4 that such a scheme would work. A proof is provided in
Appendix K.
Given the difficulty of taking discrete logarithms, it is infeasible for an
opponent to recover k from r or to recover x from s.
Another point worth noting is that the only computationally demanding
task in signature generation is the exponential calculation g^k mod p. Because this
value does not depend on the message to be signed, it can be computed ahead of
time. Indeed, a user could precalculate a number of values of r to be used to sign
documents as needed. The only other somewhat demanding task is the determination
of a multiplicative inverse, k^(-1). Again, a number of these values can be
precalculated.
Elliptic Curve Digital Signature Algorithm:
As was mentioned, the 2009 version of FIPS 186 includes a new digital signature
technique based on elliptic curve cryptography, known as the Elliptic Curve Digital
Signature Algorithm (ECDSA). ECDSA is enjoying increasing acceptance due
to the efficiency advantage of elliptic curve cryptography, which yields security
comparable to that of other schemes with a smaller key bit length.
First we give a brief overview of the process involved in ECDSA. In essence,
four elements are involved.
1. All those participating in the digital signature scheme use the same global domain
parameters, which define an elliptic curve and a point of origin on the curve.
2. A signer must first generate a public, private key pair. For the private key, the
signer selects a random or pseudorandom number. Using that random number
and the point of origin, the signer computes another point on the elliptic curve.
This is the signer’s public key.
3. A hash value is generated for the message to be signed. Using the private
key, the domain parameters, and the hash value, a signature is generated. The
signature consists of two integers, r and s.
4. To verify the signature, the verifier uses as input the signer’s public key, the
domain parameters, and the integer s. The output is a value v that is compared
to r. The signature is verified if v = r.
Let us examine each of these four elements in turn.
Global Domain Parameters
Recall from Chapter 10 that two families of elliptic curves are used in cryptographic
applications: prime curves over Z_p and binary curves over GF(2^m). For ECDSA,
prime curves are used. The global domain parameters for ECDSA are the following:
q      a prime number
a, b   integers that specify the elliptic curve defined over Z_q by the
       equation y^2 = x^3 + ax + b
G      a base point represented by G = (x_g, y_g) on the elliptic curve
n      order of the point G; that is, n is the smallest positive integer such that
       nG = O. This is also the number of points on the curve.
Key Generation
Each signer must generate a pair of keys, one private and one public. The signer,
let us call him Bob, generates the two keys using the following steps:
1. Select a random integer d, d ∈ [1, n - 1]
2. Compute Q = dG. This is a point in E_q(a, b)
3. Bob’s public key is Q and private key is d.
Digital Signature Generation and Authentication
With the public domain parameters and a private key in hand, Bob generates
a digital signature of 320 bits for message m using the following steps:
1. Select a random or pseudorandom integer k, k ∈ [1, n - 1]
2. Compute the point P = (x, y) = kG and r = x mod n. If r = 0 then go to step 1
3. Compute t = k^(-1) mod n
4. Compute e = H(m), where H is one of the SHA-2 or SHA-3 hash functions
5. Compute s = k^(-1)(e + dr) mod n. If s = 0 then go to step 1
6. The signature of message m is the pair (r, s).
Alice knows the public domain parameters and Bob’s public key. Alice is
presented with Bob’s message and digital signature and verifies the signature using
the following steps:
1. Verify that r and s are integers in the range 1 through n - 1
2. Using SHA, compute the 160-bit hash value e = H(m)
3. Compute w = s^(-1) mod n
4. Compute u_1 = ew and u_2 = rw
5. Compute the point X = (x_1, y_1) = u_1G + u_2Q
6. If X = O, reject the signature; else compute v = x_1 mod n
7. Accept Bob's signature if and only if v = r
We can see that this verification process is valid as follows. If the message received by Alice
is in fact signed by Bob, then
s = k^(-1)(e + dr) mod n
Then
k = s^(-1)(e + dr) mod n
k = (s^(-1)e + s^(-1)dr) mod n
k = (we + wdr) mod n
k = (u_1 + u_2 d) mod n
Now consider that
u_1G + u_2Q = u_1G + u_2dG = (u_1 + u_2d)G = kG
In step 6 of the verification process, we have v = x_1 mod n, where the point
X = (x_1, y_1) = u_1G + u_2Q. Thus we see that v = r, since r = x mod n, x is the x
coordinate of the point kG, and we have already seen that u_1G + u_2Q = kG.
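In practice ECDSA is used through a library rather than implemented by hand. The sketch below assumes the third-party Python package "cryptography" is installed; the curve P-256 and hash SHA-256 are illustrative choices, not mandated by the text.

# Sketch of ECDSA signing and verification with the "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())   # d and Q = dG
public_key = private_key.public_key()

message = b"sample message"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))  # DER-encoded (r, s)

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")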
RSA-PSS Digital Signature Algorithm:
In addition to the NIST Digital Signature Algorithm and ECDSA, the 2009 version
of FIPS 186 also includes several techniques based on RSA, all of which were developed
by RSA Laboratories and are in wide use. A worked-out example, using RSA,
is available at this book’s Web site.
In this section, we discuss the RSA Probabilistic Signature Scheme (RSA-PSS),
which is the latest of the RSA schemes and the one that RSA Laboratories recommends
as the most secure of the RSA schemes.
Because the RSA-based schemes are widely deployed in many applications,
including financial applications, there has been great interest in demonstrating that
such schemes are secure. The three main RSA signature schemes differ mainly in
the padding format the signature generation operation employs to embed the hash
value into a message representative, and in how the signature verification operation
determines that the hash value and the message representative are consistent.
For all of the schemes developed prior to PSS, it has not been possible to develop
a mathematical proof that the signature scheme is as secure as the underlying RSA
encryption/decryption primitive [KALI01]. The PSS approach was first proposed by
Bellare and Rogaway [BELL96c, BELL98]. This approach, unlike the other RSA-based
schemes, introduces a randomization process that enables the security of the
method to be shown to be closely related to the security of the RSA algorithm itself.
This makes RSA-PSS more desirable as the choice for RSA-based digital signature
applications.
Mask Generation Function
Before explaining the RSA-PSS operation, we need to describe the mask generation
function (MGF) used as a building block. MGF(X, maskLen) is a pseudorandom
function that has as input parameters a bit string X of any length and the
desired length L in octets of the output. MGFs are typically based on a secure
cryptographic hash function such as SHA-1. An MGF based on a hash function is
intended to be a cryptographically secure way of generating a message digest, or
hash, of variable length based on an underlying cryptographic hash function that
produces a fixed-length output.
The MGF function used in the current specification for RSA-PSS is MGF1,
with the following parameters:
Options:  Hash     hash function with output hLen octets
Input:    X        octet string to be masked
          maskLen  length in octets of the mask
Output:   mask     an octet string of length maskLen
MGF1 is defined as follows:
1. Initialize variables.
   T = empty string
   k = ⌈maskLen/hLen⌉ - 1
2. Calculate intermediate values.
   for counter = 0 to k
       Represent counter as a 32-bit string C
       T = T || Hash(X || C)
3. Output results.
   mask = the leading maskLen octets of T
In essence, MGF1 does the following. If the length of the desired output is
equal to the length of the hash value (maskLen = hLen), then the output is the
hash of the input value X concatenated with a 32-bit counter value of 0. If maskLen
is greater than hLen, then MGF1 keeps iterating by hashing X concatenated with the
counter and appending that to the current string T, so that the output is
Hash(X || 0) || Hash(X || 1) || ... || Hash(X || k)
This is repeated until the length of T is greater than or equal to maskLen, at which
point the output is the first maskLen octets of T.
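The MGF1 definition above translates almost line for line into Python; the sketch below uses SHA-1 (hLen = 20 octets) as the underlying hash, as suggested in the text.

# Direct transcription of the MGF1 definition above.
import hashlib

def mgf1(X: bytes, maskLen: int, hash_func=hashlib.sha1) -> bytes:
    hLen = hash_func().digest_size
    T = b""
    k = -(-maskLen // hLen) - 1                 # ceil(maskLen/hLen) - 1
    for counter in range(k + 1):
        C = counter.to_bytes(4, "big")          # counter as a 32-bit string
        T += hash_func(X + C).digest()          # T = T || Hash(X || C)
    return T[:maskLen]                          # leading maskLen octets of T

mask = mgf1(b"seed", 34)
print(len(mask), mask.hex())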
The Signing Operation
MESSAGE ENCODING The first stage in generating an RSA-PSS signature of a message
M is to generate from M a fixed-length message digest, called an encoded message
(EM). Figure 13.6 illustrates this process. We define the following parameters and
functions:
Options:     Hash      hash function with output hLen octets. The current
                       preferred alternative is SHA-1, which produces a 20-octet
                       hash value.
             MGF       mask generation function. The current specification calls
                       for MGF1.
             sLen      length in octets of the salt. Typically sLen = hLen, which
                       for the current version is 20 octets.
Input:       M         message to be encoded for signing.
             emBits    this value is one less than the length in bits of the RSA
                       modulus n.
Output:      EM        encoded message. This is the message digest that will be
                       encrypted to form the digital signature.
Parameters:  emLen     length of EM in octets = ⌈emBits/8⌉.
             padding1  hexadecimal string 00 00 00 00 00 00 00 00; that is, a string
                       of 64 zero bits.
             padding2  hexadecimal string of 00 octets with a length of
                       (emLen - sLen - hLen - 2) octets, followed by the
                       hexadecimal octet with value 01.
             salt      a pseudorandom number.
             bc        the hexadecimal value BC.
The encoding process consists of the following steps.
1. Generate the hash value of M: mHash = Hash(M)
2. Generate a pseudorandom octet string salt and form the block
   M′ = padding1 || mHash || salt
3. Generate the hash value of M′: H = Hash(M′)
4. Form the data block DB = padding2 || salt
5. Calculate the MGF value of H: dbMask = MGF(H, emLen - hLen - 1)
6. Calculate maskedDB = DB ⊕ dbMask
7. Set the leftmost (8·emLen - emBits) bits of the leftmost octet in maskedDB to 0
8. EM = maskedDB || H || 0xBC
We make several comments about the complex nature of this message
digest algorithm. All of the RSA-based standardized digital signature schemes
involve appending one or more constants (e.g., padding1 and padding2) in the
process of forming the message digest. The objective is to make it more difficult
for an adversary to find another message that maps to the same message digest
as a given message or to find two messages that map to the same message digest.
RSA-PSS also incorporates a pseudorandom number, namely the salt. Because the
salt changes with every use, signing the same message twice using the same private
key will yield two different signatures. This is an added measure of security.
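A hedged Python sketch of the eight encoding steps above. It inlines a small MGF1 helper, uses SHA-1 with sLen = hLen = 20 as in the text, and is meant only to show how the pieces fit together, not to be a standards-compliant implementation.

# Sketch of the EM (encoded message) construction described above.
import hashlib
import os

def mgf1(X: bytes, maskLen: int) -> bytes:
    hLen, T = 20, b""
    for counter in range(-(-maskLen // hLen)):
        T += hashlib.sha1(X + counter.to_bytes(4, "big")).digest()
    return T[:maskLen]

def emsa_pss_encode(M: bytes, emBits: int, sLen: int = 20) -> bytes:
    hLen = 20
    emLen = -(-emBits // 8)                                # ceil(emBits/8)
    mHash = hashlib.sha1(M).digest()                       # step 1
    salt = os.urandom(sLen)                                # step 2: pseudorandom salt
    H = hashlib.sha1(b"\x00" * 8 + mHash + salt).digest()  # steps 2-3: Hash(padding1 || mHash || salt)
    DB = b"\x00" * (emLen - sLen - hLen - 2) + b"\x01" + salt   # step 4: padding2 || salt
    dbMask = mgf1(H, emLen - hLen - 1)                     # step 5
    maskedDB = bytes(x ^ y for x, y in zip(DB, dbMask))    # step 6
    # step 7: clear the leftmost 8*emLen - emBits bits of the first octet
    maskedDB = bytes([maskedDB[0] & (0xFF >> (8 * emLen - emBits))]) + maskedDB[1:]
    return maskedDB + H + b"\xbc"                          # step 8

EM = emsa_pss_encode(b"message to sign", emBits=2047)      # e.g., for a 2048-bit modulus
print(len(EM), "octets")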
FORMING THE SIGNATURE We now show how the signature is formed by a signer
with private key {d, n} and public key {e, n} (see Figure 9.5). Treat the octet string
EM as an unsigned, nonnegative binary integer m. The signature s is formed by
encrypting m as follows:
s = m^d mod n
Let k be the length in octets of the RSA modulus n. For example, if the key size
for RSA is 2048 bits, then k = 2048/8 = 256. Then convert the signature value s
into the octet string S of length k octets.
Signature Verification
DECRYPTION For signature verification, treat the signature S as an unsigned,
nonnegative binary integer s. The message digest m is recovered by decrypting s as
follows:
m = s^e mod n
Then, convert the message representative m to an encoded message EM of
length emLen = ⌈(modBits - 1)/8⌉ octets, where modBits is the length in bits of
the RSA modulus n.
EM VERIFICATION EM verification can be described as follows:
Options:     Hash      hash function with output hLen octets.
             MGF       mask generation function.
             sLen      length in octets of the salt.
Input:       M         message to be verified.
             EM        the octet string representing the decrypted signature,
                       with length emLen = ⌈emBits/8⌉.
             emBits    this value is one less than the length in bits of the RSA
                       modulus n.
Parameters:  padding1  hexadecimal string 00 00 00 00 00 00 00 00; that is,
                       a string of 64 zero bits.
             padding2  hexadecimal string of 00 octets with a length of
                       (emLen - sLen - hLen - 2) octets, followed by the
                       hexadecimal octet with value 01.
1. Generate the hash value of M: mHash = Hash(M)
2. If emLen < hLen + sLen + 2, output "inconsistent" and stop
3. If the rightmost octet of EM does not have hexadecimal value BC, output
"inconsistent" and stop
4. Let maskedDB be the leftmost emLen - hLen - 1 octets of EM, and let H be
the next hLen octets
5. If the leftmost (8·emLen - emBits) bits of the leftmost octet in maskedDB are
not all equal to zero, output "inconsistent" and stop
6. Calculate dbMask = MGF(H, emLen - hLen - 1)
7. Calculate DB = maskedDB ⊕ dbMask
8. Set the leftmost (8·emLen - emBits) bits of the leftmost octet in DB to zero
9. If the leftmost (emLen - hLen - sLen - 1) octets of DB are not equal to
padding2, output "inconsistent" and stop
10. Let salt be the last sLen octets of DB
11. Form the block M′ = padding1 || mHash || salt
12. Generate the hash value of M′: H′ = Hash(M′)
13. If H = H′, output "consistent"; otherwise, output "inconsistent"
Figure 13.7 illustrates the process. The shaded boxes labeled H and H′ correspond,
respectively, to the value contained in the decrypted signature and the
value generated from the message M associated with the signature. The remaining
three shaded areas contain values generated from the decrypted signature and compared
to known constants. We can now see more clearly the different roles played
by the constants and the pseudorandom value salt, all of which are embedded in the
EM generated by the signer. The constants are known to the verifier, so that the
computed constants can be compared to the known constants as an additional check
that the signature is valid (in addition to comparing H and H′). The salt results in a
different signature every time a given message is signed with the same private key.
The verifier does not know the value of the salt and does not attempt a comparison.
Thus, the salt plays a similar role to the pseudorandom variable k in the NIST DSA
and in ECDSA. In both of those schemes, k is a pseudorandom number generated by
the signer, resulting in different signatures from multiple signings of the same message
with the same private key. A verifier does not and need not know the value of k.
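For real applications, RSA-PSS is normally used through a library rather than implemented from the steps above. The following sketch assumes the third-party Python package "cryptography"; the 2048-bit key, SHA-256, and a salt length of 32 octets (sLen = hLen) are illustrative choices.

# Sketch of RSA-PSS signing and verification with the "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"important document"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=32)  # sLen = hLen

signature = private_key.sign(message, pss, hashes.SHA256())

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("consistent")        # mirrors the "consistent" output of EM verification
except InvalidSignature:
    print("inconsistent")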
Digital Signature Standard (DSS)
As we have studied, a signature is a way of authenticating data coming from a trusted
individual. Similarly, a digital signature is a way of authenticating digital data coming from
a trusted source. The Digital Signature Standard (DSS) is a Federal Information Processing
Standard (FIPS) which defines algorithms that are used to generate digital signatures, with
the help of the Secure Hash Algorithm (SHA), for the authentication of electronic documents.
DSS only provides us with the digital signature function and not with any encryption or key
exchange strategies.
Sender Side : In DSS Approach, a hash code is generated out of the message and following
inputs are given to the signature function –
1. The hash code.
2. The random number ‘k’ generated for that particular signature.
3. The private key of the sender i.e., PR(a).
4. A global public key(which is a set of parameters for the communicating principles) i.e.,
PU(g).
These inputs to the function provide the output signature containing two
components – 's' and 'r'. Therefore, the original message concatenated with the signature is
sent to the receiver.
Receiver Side : At the receiver end, verification of the sender is done.
The hash code of the sent message is generated. There is a verification function which takes
the following inputs –
1. The hash code generated by the receiver.
2. Signature components ‘s’ and ‘r’.
3. Public key of the sender.
4. Global public key.
The output of the verification function is compared with the signature component 'r'. The
two values will match if the sent signature is valid, because only the sender, with the help of
its private key, can generate a valid signature.
Benefits of digital signatures:
1. A digital signature provides better security in a transaction; an unauthorized person
cannot tamper with the transaction.
2. You can easily track the status of documents to which a digital signature has been
applied.
3. It speeds up document delivery.
4. It is fully legal when issued by a government-authorized certifying authority.
5. Once you have signed a document digitally, you cannot deny it.
6. When a document is signed, the date and time are automatically stamped on it.
7. It is not possible to copy or change a digitally signed document.
8. It identifies the person who signs.
9. It eliminates the possibility of fraud being committed by an impostor.
Authentication:
Authentication is the process of verifying the identity of a user or information. User
authentication is the process of verifying the identity of a user when that user logs in to a
computer system.
There are different types of authentication systems which are: –
1. Single-Factor authentication: – This was the first method of security that was developed.
On this authentication system, the user has to enter the username and the password to
confirm whether that user is logging in or not. Now if the username or password is wrong,
then the user will not be allowed to log in or access the system.
Advantage of the Single-Factor Authentication System: –
 It is a very simple and straightforward system to use.
 It is not at all costly.
 The user does not need any advanced technical skills.
Disadvantages of the Single-Factor Authentication System:
 It is not very secure; its protection depends entirely on the strength of the password entered by
the user.
 The protection level in Single-Factor Authentication is quite low.
2. Two-factor Authentication: – In this authentication system, the user has to give a
username, password, and other information. There are various types of authentication
systems that are used for securing the system, such as wireless tokens, virtual
tokens, OTPs, and more.
Advantages of the Two-Factor Authentication
 The Two-Factor Authentication System provides better security than the Single-factor
Authentication system.
 The productivity and flexibility increase in the two-factor authentication system.
 Two-Factor Authentication prevents the loss of trust.
Disadvantages of Two-Factor Authentication
 It is time-consuming.
3. Multi-Factor Authentication system: – In this type of authentication, more than one
factor of authentication is needed. This gives better security to the user. Keylogger or
phishing attacks become far less effective against a Multi-Factor Authentication system.
This assures the user that their information is much less likely to be stolen.
The advantages of the Multi-Factor Authentication System are: –
 Greatly reduced security risk.
 Information is far less likely to be stolen.
 Strong protection against key-logger activity.
 Strong protection against data capture.
The disadvantages of the Multi-Factor Authentication System are: –
 It is time-consuming.
 It can rely on third parties.
The main objective of authentication is to allow authorized users to access the computer
and to deny access to unauthorized users. Operating systems generally identify/authenticate
users in the following three ways: passwords, physical identification, and biometrics. These
are explained below.
1. Passwords: Password verification is the most popular and commonly used
authentication technique. A password is a secret text that is supposed to be known
only to a user. In a password-based system, each user is assigned a valid username
and password by the system administrator. The system stores all usernames and
passwords. When a user logs in, their username and password are verified by
comparing them with the stored login name and password. If the contents match, the
user is allowed to access the system; otherwise, access is rejected (a minimal
salted-hash verification sketch is given after this list).
2. Physical Identification: This technique includes machine-readable badges(symbols),
cards, or smart cards. In some companies, badges are required for employees to gain
access to the organization’s gate. In many systems, identification is combined with
the use of a password i.e the user must insert the card and then supply his /her
password. This kind of authentication is commonly used with ATMs. Smart cards can
enhance this scheme by keeping the user password within the card itself. This allows
authentication without the storage of passwords in the computer system. The loss of
such a card can be dangerous.
3. Biometrics: This method of authentication is based on the unique biological
characteristics of each user such as fingerprints, voice or face recognition, signatures,
and eyes. A biometric system typically requires:
 A scanner or other device to gather the necessary data about the user.
 Software to convert the data into a form that can be compared and stored.
 A database that stores information for all authorized users.
Commonly used biometric characteristics include:
 Facial Characteristics – Humans are differentiated on the basis of facial
characteristics such as eyes, nose, lips, eyebrows, and chin shape.
 Fingerprints – Fingerprints are believed to be unique across the entire human
population.
 Hand Geometry – Hand geometry systems identify features of the hand that include
the shape, length, and width of fingers.
 Retinal pattern – It is concerned with the detailed structure of the eye.
 Signature – Every individual has a unique style of handwriting, and this feature is
reflected in the signatures of a person.
 Voice – This method records the frequency pattern of the voice of an individual
speaker.
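As a simple illustration of the password technique described in item 1 above, the following Python sketch stores only a salt and a PBKDF2 hash of the password, never the plaintext, and compares hashes in constant time; all names and parameters are illustrative.

# Minimal salted-hash password verification sketch.
import hashlib
import hmac
import os

def create_record(password: str) -> dict:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "hash": digest}   # what the system actually stores

def verify_password(password: str, record: dict) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 100_000)
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(candidate, record["hash"])

record = create_record("s3cret!")
print(verify_password("s3cret!", record))   # True: access allowed
print(verify_password("guess", record))     # False: access rejected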
Authentication Requirements:
Authentication Requirements In the context of communications across a network, the
following attacks can be identified:
1. Disclosure: Release of message contents to any person or process not possessing the
appropriate cryptographic key.
2. Traffic analysis: Discovery of the pattern of traffic between parties. In a
connection-oriented application, the frequency and duration of connections could be
determined. In either a connection-oriented or connectionless environment, the number and
length of messages between parties could be determined.
3. Masquerade: Insertion of messages into the network from a fraudulent source. This
includes the creation of messages by an opponent that are purported to come from an
authorized entity. Also included are fraudulent acknowledgments of message receipt or
nonreceipt by someone other than the message recipient.
4. Content modification: Changes to the contents of a message, including insertion,
deletion, transposition, or modification.
5. Sequence modification: Any modification to a sequence of messages between parties,
including insertion, deletion, and reordering.
6. Timing modification: Delay or replay of messages. In a connection-oriented application,
an entire session or sequence of messages could be a replay of some previous valid session,
or individual messages in the sequence could be delayed or replayed.
7. Repudiation: Denial of receipt of a message by the destination or denial of transmission
of a message by the source.
Message authentication is a procedure to verify that received messages come from the
alleged source and have not been altered. Message authentication may also verify sequencing
and timeliness.
A digital signature is an authentication technique that also includes measures to counter
repudiation by either source or destination. Any message authentication or digital signature
mechanism can be viewed as having fundamentally two levels.
At the lower level, there must be some sort of function that produces an
authenticator: a value to be used to authenticate a message. This lower-level
function is then used as a primitive in a higher-level authentication protocol that enables a
receiver to verify the authenticity of a message. This section is concerned with the types of
functions that may be used to produce an authenticator. These functions may be grouped into
three classes, as follows:
1. Message Encryption: The ciphertext of the entire message serves as its authenticator.
2. Message Authentication Code (MAC): A public function of the message and a secret
key that produces a fixed-length value that serves as the authenticator.
3. Hash Functions: A public function that maps a message of any length into a fixed-length
hash value, which serves as the authenticator. We will mainly be concerned with the last class
of functions; however, it must be noted that hash functions and MACs are very similar except
that a hash code does not require a secret key. With regard to the first class, it can be seen to
provide authentication by virtue of the fact that only the sender and receiver know the key;
therefore the message could only have come from the sender. However, there is also the
problem that the plaintext message should be recognisable as a plaintext message (for
example, if it were some sort of digitised X-ray, it might not be).
Authentication applications:
Authentication keeps invalid users out of databases, networks, and other resources. These
types of authentication use factors, a category of credential for verification, to confirm user
identity. Here are just a few authentication methods.

Single-Factor / Primary Authentication:

Historically the most common form of authentication, Single-Factor Authentication, is also
the least secure, as it only requires one factor to gain full system access. It could be a
username and password, pin-number or another simple code. While user-friendly, Single-
Factor authenticated systems are relatively easy to infiltrate by phishing, key logging, or mere
guessing. As there is no other authentication gate to get through, this approach is highly
vulnerable to attack.

Two-Factor Authentication (2FA):

 By adding a second factor for verification, two-factor authentication reinforces
security efforts. It is an added layer that essentially double-checks that a user is, in
reality, the user they’re attempting to log in as—making it much harder to break. With
this method, users enter their primary authentication credentials (like the
username/password mentioned above) and then must input a secondary piece of
identifying information.
 The secondary factor is usually more difficult, as it often requires something the valid
user would have access to, unrelated to the given system. Possible secondary factors
are a one-time password from an authenticator app (a minimal TOTP sketch follows
this list), a phone number or device that can receive a push notification or SMS code,
or a biometric like fingerprint (Touch ID), facial (Face ID), or voice recognition.
 2FA significantly minimizes the risk of system or resource compromise, as it’s
unlikely an invalid user would know or have access to both authentication factors.
While two-factor authentication is now more widely adopted for this reason, it does
cause some user inconvenience, which is still something to consider in
implementation.
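As mentioned in the list above, a common second factor is a one-time password from an authenticator app. The following Python sketch implements the standard time-based OTP (TOTP, RFC 6238) computation that both the server and the app run against a shared secret; the secret and parameters shown are illustrative.

# Minimal time-based one-time password (TOTP) sketch.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian time counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA-1 as in RFC 6238
    offset = mac[-1] & 0x0F                             # dynamic truncation
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

shared_secret = b"server-and-app-shared-secret"   # illustrative shared secret
print(totp(shared_secret))   # both sides compute the same 6-digit code for this time step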

Single Sign-On (SSO):

 With SSO, users only have to log in to one application and, in doing so, gain access to
many other applications. This method is more convenient for users, as it removes the
obligation to retain multiple sets of credentials and creates a more seamless
experience during operative sessions.
 Organizations can accomplish this by identifying a central domain (most ideally, an
IAM system) and then creating secure SSO links between resources. This process
allows domain-monitored user authentication and, with single sign-off, can ensure
that when valid users end their session, they successfully log out of all linked
resources and applications.

Multi-Factor Authentication (MFA):

 Multi-factor authentication is a high-assurance method, as it uses additional factors,
unrelated to the given system, to legitimize users. Like 2FA, MFA uses factors like biometrics,
device-based confirmation, additional passwords, and even location or behavior-based
information (e.g., keystroke pattern or typing speed) to confirm user identity.
However, the difference is that while 2FA always utilizes only two factors, MFA
could use two or three, with the ability to vary between sessions, adding an elusive
element for invalid users.
What are the most common authentication protocols?
 Authentication protocols are the designated rules for interaction and verification that
endpoints (laptops, desktops, phones, servers, etc.) or systems use to communicate.
For as many different applications that users need access to, there are just as many
standards and protocols. Selecting the right authentication protocol for your
organization is essential for ensuring secure operations and use compatibility. Here
are a few of the most commonly used authentication protocols.

Password Authentication Protocol (PAP):

While common, PAP is the least secure protocol for validating users, due mostly to its lack of
encryption. It is essentially a routine log in process that requires a username and password
combination to access a given system, which validates the provided credentials. It’s now
most often used as a last option when communicating between a server and desktop or remote
device.

Challenge Handshake Authentication Protocol (CHAP):

CHAP is an identity verification protocol that verifies a user to a given network with a higher
standard of encryption using a three-way exchange of a “secret.” First, the local router sends
a “challenge” to the remote host, which then sends a response with an MD5 hash function.
The router matches against its expected response (hash value), and depending on whether the
router determines a match, it establishes an authenticated connection—the “handshake”—or
denies access. It is inherently more secure than PAP, as the router can send a challenge at any
point during a session, and PAP only operates on the initial authentication approval.
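A minimal Python sketch of the challenge/response computation described above: the verifier sends a random challenge, the peer returns MD5(identifier || secret || challenge), and the shared secret itself never crosses the link. All names are illustrative, and a real CHAP implementation would also handle the PPP framing around these values.

# Sketch of the CHAP hash computation and check.
import hashlib
import os

shared_secret = b"chap-shared-secret"   # configured on both ends in advance

def make_challenge():
    return 1, os.urandom(16)             # (identifier, random challenge)

def chap_response(identifier, challenge, secret):
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server (authenticator) side: issue the challenge
ident, challenge = make_challenge()
# Client (peer) side: compute and send the response
response = chap_response(ident, challenge, shared_secret)
# Server side: recompute the expected hash and compare
expected = chap_response(ident, challenge, shared_secret)
print("handshake ok" if response == expected else "access denied")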

Extensible Authentication Protocol (EAP):

This protocol supports many types of authentication, from one-time passwords to smart cards.
When used for wireless communications, EAP is the highest level of security as it allows a
given access point and remote device to perform mutual authentication with built-in
encryption. It connects users to the access point that requests credentials, confirms identity
via an authentication server, and then makes another request for an additional form of user
identification to confirm again via the server—completing the process with all messages
transmitted in encrypted form.

Kerberos:

What is Kerberos Authentication?

It is a network authentication protocol that uses third-party authorization for validating user
profiles. It also employs symmetric key cryptography for plain-text encryption and cipher-
text decryption. The keys in cryptography consist of a secret key that shares confidential
information between two or more objects.

In short, it helps in maintaining the privacy of an organization. Now, since you have
understood what Kerberos is, you might be wondering why Kerberos. There are various
authorization protocols, but Kerberos is an improved version among them all. It is very
difficult for cybercriminals to break into the Kerberos authentication system. An organization
will have weak points that can be managed by using Kerberos to defend itself from
cybercriminals. The tool is used by popular operating systems such as Windows, UNIX,
Linux, etc. With the use of the Kerberos authentication system, the internet has become a
more secure place.
Parameters of Kerberos:

There are three main parameters that are used in Kerberos. They are:

1. Client
2. Server
3. Key Distribution Center (KDC)

These three components act as a third-party authentication service.

It uses cryptography for maintaining mutual privacy by preventing the loss of packets while
transferring over the network.

Further, we will try to understand how Kerberos works.

What is Kerberos used for?

Nowadays, Kerberos is used in every industry for maintaining a secure system to prevent
cybercrimes. The authentication protocols of it depend on regular auditing and various
authentication features. The two major goals of Kerberos are security and authentication.

Kerberos is used in email delivery systems, text messages, NFS, signaling, POSIX
authentication, and much more. It is also used in various networking protocols, such as
SMTP, POP, HTTP, etc. Further, it is used in client or server applications and in the
components of different operating systems to make them secure.

Kerberos working:

We have already discussed in the previous sections that Kerberos is an authentication
protocol. It has proved to be one of the essential components of client or server applications.
It is also used in various fields for network security and providing mutual authentication. In
this section, we will discuss how Kerberos works. For that, first, we need to know about
Kerberos’s components.
Components of Kerberos:

Kerberos mainly provides two services. They are:

 Authentication service
 Ticket-granting service

For providing these services, Kerberos uses its various components. Further, let us discuss the
following principal components that are used for authentication:

1. Client

The client helps to initiate a service request for communicating with the user.

2. Server

All the services that are required by the user are hosted by the server.

3. Authentication Server (AS)

As the name suggests, AS is used for the authentication of the client and the server. AS
assigns a ticket through Ticket Granting Ticket (TGT) to the client. The assigned ticket
ensures the authentication of the client to other servers.

4. Key Distribution Center (KDC)

There are three parts to the Kerberos authentication service:

 Database
 Ticket Granting Server (TGS)
 Authentication Server (AS)
These parts reside in a single unit known as the Key Distribution Center.

5. Ticket Granting Server (TGS):

This server provides a service to assign tickets to the user as a unique key for authentication.

There are unique keys that are used by the authentication server and the TGS for both clients
and servers. Now, let us look at the cryptographic secret keys that are used for authentication:

 Client or User Secret Key: It is the hash of the password set by the user that acts as
the client or user secret key.
 TGS Secret Key: It is the secret key that helps in deciding TGS.
 Server Secret Key: It helps to determine the server that provides the services.

Architecture of Kerberos:
The following steps are involved in the Kerberos workflow:

Step 1: Initially, there is an authentication request from the client. The user requests TGS
from the authentication server.

Step 2: After the client’s request, the client data is validated by the KDC. The authentication
server verifies the client and the TGS from the database. The authentication server then
generates a cryptographic key (SK1) after checking both values and implementing the hash of
the password. The authentication server also computes a session key. This session key uses
the secret key of the client (SK2) for encryption.

Step 3: The authentication server then creates a ticket that consists of the ID, network
address, secret key, and lifetime of the client.

Step 4: The decryption of the message is then performed by the client by using the client’s
secret key.

Step 5: Now, the client demands entrance into the server by using TGS. The TGS creates a
ticket that acts as an authenticator here.

Step 6: Another ticket is generated by KDC for the file server. Then, the TGS decrypts the
ticket for obtaining the secret key initiated by the client. It checks the network address and ID
by decrypting the authenticator. If the client ID and the network address match successfully,
then KDC shares a service key with the client and the server.

Step 7: The client utilizes the file ticket for authentication. The message is decrypted by
using SK1 to obtain SK2. Again, the TGS generates a new ticket to send to the target server.

Step 8: Here, the target server decrypts the file ticket by using the secret key. After that, the
server performs checks on the client details by decrypting SK2. The target server also checks
the validity of the ticket. Finally, when all of the client’s encrypted data is decrypted and
verified, the server authenticates the client to use the services.

Kerberos Limitations:

 Each network service must be modified individually for use with Kerberos
 It doesn’t work well in a timeshare environment
 Secured Kerberos Server
 Requires an always-on Kerberos server
 All passwords are encrypted with a single key
 Assumes workstations are secure
 May result in cascading loss of trust.
 Scalability

Advantages of Kerberos Authentication:

1. Enhanced security

Authorization from third parties, multiple secret keys, and cryptography make Kerberos one
of the most reliable authentication protocols in the industry. When using Kerberos, passwords
for the users are never sent through the network. They are sent in an encrypted form and the
hidden keys move through the device. It becomes impossible to collect enough data to
impersonate a customer or service, even if someone is recording conversations.

2. Access control

Access control is a key part of modern business. The protocol enables effective access control.
With the help of this protocol, a business gets a single point for upholding safety protocols and
keeping login records.

3. Transparency and auditability

Transparent and accurate logs are important for auditing processes and inquiries. Kerberos
records who requested what and at what time, which helps maintain transparency.

4. Shared authentication

It allows users and service systems to authenticate each other. Users and server systems can
understand that they are communicating with valid partners at each stage of the
authentication process.

5. Limited-lifetime ticket

All tickets in the Kerberos model have serial numbers and lifetime data. Admins can monitor
how long users' authorizations remain valid. Short ticket lifetimes help to prevent
brute-force and replay attacks.

6. Scalability

Several tech companies, including Apple, Microsoft, and Sun, have implemented the
Kerberos authentication system. This level of acceptance speaks volumes about the capability
of Kerberos to keep up with the needs of large companies.
7. Reusable authentications

The authentication of Kerberos is reusable and robust. Users need to verify devices with
Kerberos only once. They can verify network services for the lifespan of the ticket without
having to re-enter personal information.

X.509 Directory Services:
X.509 is a digital certificate that is built on top of a widely trusted standard known as ITU
or International Telecommunication Union X.509 standard, in which the format of PKI
certificates is defined. X.509 digital certificate is a certificate-based authentication security
framework that can be used for providing secure transaction processing and private
information. These are primarily used for handling the security and identity in computer
networking and internet-based communications.
Working of X.509 Authentication Service Certificate:
The core of the X.509 authentication service is the public key certificate connected to each
user. These user certificates are assumed to be produced by some trusted certification
authority and positioned in the directory by the user or the certified authority. These
directory servers are only used for providing an effortless reachable location for all users so
that they can acquire certificates. X.509 standard is built on an IDL known as ASN.1. With
the help of Abstract Syntax Notation, the X.509 certificate format uses an associated public
and private key pair for encrypting and decrypting a message.
Once an X.509 certificate is provided to a user by the certified authority, that certificate is
attached to it like an identity card. The chances of someone stealing it or losing it are less,
unlike other unsecured passwords. With the help of this analogy, it is easier to imagine how
this authentication works: the certificate is basically presented like an identity at the
resource that requires authentication.
Format of X.509 Authentication Service Certificate:

Generally, the certificate includes the elements given below:
 Version number: It defines the X.509 version that concerns the certificate.
 Serial number: It is the unique number that the certified authority issues.
 Signature Algorithm Identifier: This is the algorithm that is used for signing the
certificate.
 Issuer name: Tells about the X.500 name of the certified authority which signed and
created the certificate.
 Period of Validity: It defines the period for which the certificate is valid.
 Subject Name: Tells about the name of the user to whom this certificate has been
issued.
 Subject’s public key information: It defines the subject’s public key along with an
identifier of the algorithm for which this key is supposed to be used.
 Extension block: This field contains additional standard information.
 Signature: This field contains the hash code of all other fields which is encrypted by
the certified authority private key.
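As an illustration of reading the fields listed above, the following sketch assumes the third-party Python package "cryptography" and a hypothetical PEM-encoded certificate file server.pem.

# Sketch: inspect the X.509 fields described above.
from cryptography import x509

with open("server.pem", "rb") as f:          # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.version)                          # Version number
print(cert.serial_number)                    # Serial number
print(cert.signature_algorithm_oid)          # Signature algorithm identifier
print(cert.issuer.rfc4514_string())          # Issuer name
print(cert.not_valid_before, cert.not_valid_after)   # Period of validity
print(cert.subject.rfc4514_string())         # Subject name
print(cert.public_key())                     # Subject's public key information
print(cert.extensions)                       # Extension block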

Applications of X.509 Authentication Service Certificate:
Many protocols depend on X.509 and it has many applications, some of them are given
below:
 Document signing and Digital signature
 Web server security with the help of Transport Layer Security (TLS)/Secure Sockets
Layer (SSL) certificates
 Email certificates
 Code signing
 Secure Shell Protocol (SSH) keys
 Digital Identities
UNIT IV E-MAIL AND IP SECURITY

E-mail and IP Security: Electronic mail security: Email Architecture - PGP
– Operational Descriptions - Key management - Trust Model - S/MIME. IP
Security: Overview - Architecture - ESP, AH Protocols, IPSec Modes –
Security association - Key management.

E-mail:
Email stands for Electronic Mail. It is a method to send messages from one computer to
another computer through the internet. It is mostly used in business, education, technical
communication, and document interactions. It allows communication with people all over
the world. In 1971, Ray Tomlinson sent the first test email, containing only text, to himself.
It is information sent electronically between two or more people over a network. It
involves a sender and one or more receivers.

History of Email:

The age of email services is older than ARPANET and the Internet. The early emails were
only sent to the same computer. Network email services started in 1971 with Ray Tomlinson.
He developed a system to send mail between users on different hosts across the ARPANET,
using the @ sign to separate the user name from the destination host; this came to be
recognized as email.
Uses of Email:
 Email services are used in various sectors, organizations, either personally, or
among a large group of people. It provides an easy way to communicate with
individuals or groups by sending and receiving documents, images, links, and other
files. It also provides the flexibility of communicating with others on their own
schedule.
 Large and small companies can use email services to communicate with many
employees and customers. A company can send emails to many employees at a time. It
becomes a professional way to communicate. A newsletter service is also used to send
company advertisements, promotions, and other subscribed content to subscribers.

Types of Email

Newsletters
 It is a type of email sent by an individual or company to the subscriber. It contains
an advertisement, product promotion, updates regarding the organization, and
marketing content. It might be upcoming events, seminars, webinars from the
organization.
On boarding emails
 It is an email a user receives right after subscription. These emails are sent to buyers
to familiarize them with a product and tell them how to use it. It also contains details
about their journey in the new organization.
Transactional
 These types of emails might contain invoices for recent transactions and details about
those transactions. If a transaction fails, they give details about when the amount will
be refunded. We can say that transactional emails are confirmations of purchase.
Plain-Text Emails
 These types of emails contain just simple text similar to other text message services.
It does not include images, videos, documents, graphics, or any attachments. Plain-
text emails are also used to send casual chatting like other text message services.
Advantages of Email Services
 These are the following advantages of email services:
Easy and Fast:
 Composing an email is very simple and one of the fast ways to communicate. We
can send an email within a minute just by clicking the mouse. It contains a
minimum lag time and can be exchanged quickly.
Secure:
 Email services are a secure and reliable method to receive and send information.
The spam-filtering feature provides more security because a user can easily eliminate
malicious content.
Mass Sending:
 We can easily send a message to many people at a time through email. Suppose a
company wants to send holiday information to all employees; using email, this can
be done easily. The mail merge feature in MS Word provides more options to
send messages to many people just by exchanging the relevant information.
Multimedia Email:
 Email offers to send multimedia, documents, images, audio files, videos, and
various types of files. We can easily attach the types of files in the original format or
compressed format.

Disadvantages of Email Services

Malicious Use:
 Anyone can send an email just by knowing the recipient's email address. An anonymous
or unauthorized person can send an email if they have an email address. The
attachment feature of email can be a major disadvantage: hackers can send
viruses through email, because sometimes the spam filter is unable to classify
suspicious emails.
Spam:
 Email services keep improving the spam filter. However, sometimes an important
email is moved into spam without any notification.
Time Consuming:
 Responding through email takes more time than other messaging services
like WhatsApp, Telegram, etc. Email is good for professional discussion but not
good for casual chatting.

Popular Email Services

 These are some popular email services:
Gmail:
 Gmail is the world's most used email service, provided by Google. Today, it has more
than 1.5 billion active users worldwide. It is a web-based email service available on
various devices. It supports email clients via the POP and IMAP protocols. Gmail
is like other mail services: we can send and receive emails, block spam, create an
address book, and perform other basic email tasks.
 To access Gmail, we need a Google account. It is just like many other services offered
by Google to registered users. The creation of a new Google account is free for
everyone. A Gmail address has the form xyz@gmail.com, where xyz is your unique
username.
Outlook:
 Outlook is also a popular webmail service, founded in 1996 by Sabeer Bhatia and
Jack Smith as Hotmail and acquired by Microsoft in 1997. It is older than Gmail.
Like other webmail services, Outlook supports Chrome, Firefox, Safari, and later
versions of Internet Explorer. It has additional features, such as keyboard controls, that
provide more facilities to users.
 To access Outlook, we need an Outlook account. It is free to create an account and
send mail. It also works with other Microsoft applications like Microsoft Word,
Power BI, etc. An Outlook address has the form abc@outlook.com, where abc is our
unique username.

Email Architecture:
Electronic mail, commonly known as email, is a method of exchanging messages over the
internet. Here are the basics of email:

1. An email address: This is a unique identifier for each user, typically in the format
username@domain.com.
2. An email client: This is a software program used to send, receive and manage emails,
such as Gmail, Outlook, or Apple Mail.
3. An email server: This is a computer system responsible for storing and forwarding
emails to their intended recipients.

To send an email:

1. Compose a new message in your email client.
2. Enter the recipient’s email address in the “To” field.
3. Add a subject line to summarize the content of the message.
4. Write the body of the message.
5. Attach any relevant files if needed.
6. Click “Send” to deliver the message to the recipient’s email server.
7. Emails can also include features such as cc (carbon copy) and bcc (blind carbon copy)
to send copies of the message to multiple recipients, and reply, reply all, and forward
options to manage the conversation.
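The steps above can be illustrated with Python's standard smtplib and email modules. The server address, credentials, and addresses below are hypothetical placeholders; a real deployment would use the provider's actual SMTP settings.

# Sketch of composing and sending an email over SMTP.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"                     # hypothetical sender
msg["To"] = "recipient@example.com"                    # the "To" field
msg["Subject"] = "Meeting agenda"                      # subject line summarizing the message
msg.set_content("Please find the agenda below ...")    # body of the message

with smtplib.SMTP("smtp.example.com", 587) as server:  # hypothetical email server
    server.starttls()                                  # encrypt the session
    server.login("sender@example.com", "app-password") # hypothetical credentials
    server.send_message(msg)                           # "Send"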
Electronic Mail (e-mail) is one of the most widely used services of the Internet. This service
allows an Internet user to send a message in a formatted manner (mail) to another Internet
user in any part of the world. The message in a mail not only contains text, but also images,
audio, and video data. The person who sends the mail is called the sender and the person who
receives it is called the recipient. It is just like the postal mail service.
Components of E-Mail System: The basic components of an email system are: User Agent
(UA), Message Transfer Agent (MTA), Mailbox, and Spool file. These are explained below.
1. User Agent (UA): The UA is normally a program which is used to send and receive
mail. Sometimes, it is called a mail reader. It accepts a variety of commands for
composing, receiving, and replying to messages, as well as for manipulation of the
mailboxes.
2. Message Transfer Agent (MTA): The MTA is actually responsible for the transfer of mail
from one system to another. To send a mail, a system must have a client MTA and a system
MTA. It transfers mail to the mailboxes of recipients if they are connected to the same
machine. It delivers mail to a peer MTA if the destination mailbox is on another machine.
The delivery from one MTA to another MTA is done by the Simple Mail Transfer Protocol.

3. Mailbox: It is a file on the local hard drive to collect mails. Delivered mails are present in
this file. The user can read or delete them according to his/her requirements. To use the
e-mail system, each user must have a mailbox. Access to the mailbox is restricted to its owner.
4. Spool file: This file contains mails that are to be sent. The user agent appends outgoing
mails to this file using SMTP. The MTA extracts pending mail from the spool file for
delivery. E-mail allows one name, an alias, to represent several different e-mail
addresses; this is known as a mailing list. Whenever a user has to send a message, the system
checks the recipient's name against the alias database. If a mailing list is present for the
defined alias, separate messages, one for each entry in the list, must be prepared and handed
to the MTA. If there is no such mailing list for the defined alias, the name itself becomes the
recipient address and a single message is delivered to the mail transfer entity.
Services provided by E-mail system:
 Composition – Composition refers to the process of creating messages and answers.
Any kind of text editor can be used for composition.
 Transfer – Transfer means the sending procedure of mail, i.e., from the sender to the recipient.
 Reporting – Reporting refers to confirmation of delivery of mail. It helps the user to check
whether their mail was delivered, lost, or rejected.
 Displaying – It refers to presenting mail in a form that is understood by the user.
 Disposition – This step concerns what the recipient will do after
receiving the mail, i.e., save the mail, delete it before reading, or delete it after reading.

ELECTRONIC MAIL SECURITY:

PRETTY GOOD PRIVACY (PGP) :

PGP provides a confidentiality and authentication service that can be used for electronic
mail and file storage applications. The steps involved in PGP are:
 Select the best available cryptographic algorithms as building blocks.
 Integrate these algorithms into a general-purpose application that is independent of
operating system and processor and that is based on a small set of easy-to-use commands.
 Make the package and its documentation, including the source code, freely available
via the internet, bulletin boards, and commercial networks.
 Enter into an agreement with a company to provide a fully compatible, low-cost
commercial version of PGP.

PGP has grown explosively and is now widely used. A number of reasons can be cited for
this growth. It is available free worldwide in versions that run on a variety of platforms. It is
based on algorithms that have survived extensive public review and are considered extremely
secure (e.g., RSA, DSS, and Diffie-Hellman for public-key encryption). It has a wide range of
applicability.

It was not developed by, nor is it controlled by, any governmental or standards organization.
Operational description: The actual operation of PGP consists of five services:

1. Authentication

2. Confidentiality

3. Compression

4. E-mail compatibility

5. Segmentation.

1. Authentication: The sequence for authentication is as follows: The sender creates the
message. SHA-1 is used to generate a 160-bit hash code of the message. The hash code is
encrypted with RSA using the sender's private key, and the result is prepended to the message.
The receiver uses RSA with the sender's public key to decrypt and recover the hash code.
The receiver generates a new hash code for the message and compares it with the decrypted
hash code. If the two match, the message is accepted as authentic.

2. Confidentiality: Confidentiality is provided by encrypting messages to be transmitted or
to be stored locally as files. In both cases, the conventional encryption algorithm CAST-128
may be used. The 64-bit cipher feedback (CFB) mode is used. In PGP, each conventional key
is used only once. That is, a new key is generated as a random 128-bit number for each
message. Thus, although this is referred to as a session key, it is in reality a one-time key. To
protect the key, it is encrypted with the receiver's public key.

 The sequence for confidentiality is as follows: The sender generates a message and a
random 128-bit number to be used as a session key for this message only.
 The message is encrypted using CAST-128 with the session key. The session key is
encrypted with RSA, using the receiver's public key, and is prepended to the message.
 The receiver uses RSA with its private key to decrypt and recover the session key.
The session key is used to decrypt the message.
 Confidentiality and authentication Here both services may be used for the same
message. First, a signature is generated for the plaintext message and prepended to the
message. Then the plaintext plus the signature is encrypted using CAST-128 and the
session key is encrypted using RSA.
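A hedged Python sketch of the confidentiality sequence above, assuming the third-party "cryptography" package. AES-128 in CFB mode stands in for CAST-128 (which the text specifies) and OAEP is an illustrative RSA padding choice; the structure (random one-time session key, message encrypted with it, session key wrapped with the receiver's RSA public key) follows the listed steps.

# Sketch of PGP-style hybrid encryption: session key + public-key key wrapping.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_pub = receiver_key.public_key()

# Sender: generate a one-time 128-bit session key, encrypt the message with it,
# then wrap the session key with the receiver's public key.
session_key, iv = os.urandom(16), os.urandom(16)
enc = Cipher(algorithms.AES(session_key), modes.CFB(iv)).encryptor()
ciphertext = enc.update(b"confidential mail body") + enc.finalize()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = receiver_pub.encrypt(session_key, oaep)

# Receiver: recover the session key with the private key, then decrypt the message.
recovered = receiver_key.decrypt(wrapped_key, oaep)
dec = Cipher(algorithms.AES(recovered), modes.CFB(iv)).decryptor()
print(dec.update(ciphertext) + dec.finalize())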

3. Compression As a default, PGP compresses the message after applying the signature but
before encryption. This has the benefit of saving space for both e-mail transmission and for
file storage. The signature is generated before compression for two reasons

• It is preferable to sign an uncompressed message so that one can store only the
uncompressed message together with the signature for future verification. If one signed a
compressed document, then it would be necessary either to store a compressed version of the
message for later verification or to recompress the message when verification is required.

• Even if one were willing to dynamically generate a recompressed message for verification, PGP's compression algorithm presents a difficulty. The algorithm is not deterministic; various implementations of the algorithm achieve different tradeoffs in running speed versus compression ratio and, as a result, produce different compressed forms.

• Message encryption is applied after compression to strengthen cryptographic security.


Because the compressed message has less redundancy than the original plaintext,
cryptanalysis is more difficult. The compression algorithm used is ZIP.

4. E-mail compatibility: Many electronic mail systems only permit the use of blocks consisting of ASCII text. To accommodate this restriction, PGP provides the service of converting the raw 8-bit binary stream to a stream of printable ASCII characters. The scheme used for this purpose is radix-64 conversion: each group of three octets of binary data is mapped into four ASCII characters.
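Radix-64 is essentially the base64 mapping: 24 bits in, four printable characters out. A quick illustration in Python with arbitrarily chosen octet values:

```python
import base64

octets = bytes([0x14, 0xFB, 0x9C])     # three arbitrary octets (24 bits)
encoded = base64.b64encode(octets)     # mapped to four printable ASCII characters
print(encoded)                         # b'FPuc'
```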

5. Segmentation and reassembly E-mail facilities often are restricted to a maximum length.
E.g., many of the facilities accessible through the internet impose a maximum length of
50,000 octets. Any message longer than that must be broken up into smaller segments, each
of which is mailed separately. To accommodate this restriction, PGP automatically
subdivides a message that is too large into segments that are small enough to send via e-mail.
The segmentation is done after all the other processing, including the radix-64 conversion. At
the receiving end, PGP must strip off all e-mail headers and reassemble the entire original
block before performing the other steps.

Cryptographic keys and key rings

Three separate requirements can be identified with respect to these keys:

 A means of generating unpredictable session keys is needed.
 It must allow a user to have multiple public-key/private-key pairs.
 Each PGP entity must maintain a file of its own public/private key pairs as well as a file of public keys of correspondents.

a. Session key generation: Each session key is associated with a single message and is used only for the purpose of encryption and decryption of that message. Random 128-bit numbers are generated using CAST-128 itself. The input to the random number generator consists of a 128-bit key and two 64-bit blocks that are treated as plaintext to be encrypted. Using cipher feedback mode, CAST-128 produces two 64-bit ciphertext blocks, which are concatenated to form the 128-bit session key. The plaintext input to CAST-128 is itself derived from a stream of 128-bit randomized numbers. These numbers are based on keystroke input from the user.

b. Key identifiers: If multiple public/private key pairs are used, how does the recipient know which of the public keys was used to encrypt the session key? One simple solution would be to transmit the public key with the message, but this is unnecessarily wasteful of space. Another solution would be to associate an identifier with each public key that is unique at least within each user. The solution adopted by PGP is to assign a key ID to each public key that is, with very high probability, unique within a user ID. The key ID associated with each public key consists of its least significant 64 bits; i.e., the key ID of public key KUa is (KUa mod 2^64).
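Conceptually, computing the key ID is just taking the public key value modulo 2^64 (i.e., keeping the low 64 bits). A toy illustration; the key value shown is an arbitrary assumption, far too small for real use:

```python
# Toy illustration: the key ID is the least significant 64 bits of the public key value.
public_key_value = 0x9A3F1C55D2E8B7A4C6F0123456789ABCDEF01234   # arbitrary example value
key_id = public_key_value % (2 ** 64)   # equivalently: public_key_value & 0xFFFFFFFFFFFFFFFF
print(hex(key_id))
```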

A message consists of three components.

 Message component – includes actual data to be transmitted, as well as the filename


and a timestamp that specifies the time of creation
 Session key component – includes session key and the identifier of the recipient
public key.
 Signature component – includes the following
 Timestamp – time at which the signature was made.
 Message digest – hash code.
 Leading two octets of message digest – enable the recipient to determine whether the correct public key was used to decrypt the message digest for authentication.
 Key ID of sender’s public key – identifies the public key

Notation:

 EKUb = encryption with user B's public key
 EKRa = encryption with user A's private key
 EKs = encryption with session key
 ZIP = ZIP compression function
 R64 = Radix-64 conversion function
Transmission and Reception of PGP messages

PGP provides a pair of data structures at each node: one to store the public/private key pairs owned by that node, and one to store the public keys of the other users known at that node. These data structures are referred to as the private key ring and the public key ring. The general structures of the private and public key rings are shown below:
 Timestamp - the date/time when this entry was made.
 Key ID - the least significant bits of the public key.
 Public key - public key portion of the pair.
 Private Key - private key portion of the pair.
 User ID - the owner of the key
 Key legitimacy field – indicates the extent to which PGP will trust that this is a valid
public key for this user.
 Signature trust field – indicates the degree to which this PGP user trusts the signer to
certify public key.
 Owner trust field - indicates the degree to which this public key is trusted to sign other
public key certificates.

PGP message generation First consider message transmission and assume that the
message is to be both signed and encrypted. The sending PGP entity performs the
following steps
1. Signing the message
• PGP retrieves the sender‟s private key from the private key ring using user ID
as an index. If user ID was not provided, the first private key from the ring is
retrieved.
• PGP prompts the user for the passphrase (password) to recover the unencrypted
private key.
• The signature component of the message is constructed.
2. Encrypting the message
• PGP generates a session key and encrypts the message.
• PGP retrieves the recipient‟s public key from the public key ring using user ID
as index.
The receiving PGP entity performs the following steps:

1.Decrypting the message

• PGP retrieves the receiver‟s private key from the private key ring, using the
key ID field in the session key component of the message as an index.
• PGP prompts the user for the passphrase (password) to recover the
unencrypted private key.
• PGP then recovers the session key and decrypts the message.

2.Authenticating the message

• PGP retrieves the sender‟s public key from the public key ring, using the key ID
field in the signature key component of the message as an index.
• PGP recovers the transmitted message digest.
• PGP computes the message digest for the received message and compares it to
the transmitted message digest to authenticate.
S/MIME:
S/MIME (Secure/Multipurpose Internet Mail Extension) is a security enhancement to the
MIME Internet e-mail format standard, based on technology from RSA Data Security.

Multipurpose Internet Mail Extensions

MIME is an extension to the RFC 822 framework that is intended to address some of the problems
and limitations of the use of SMTP (Simple Mail Transfer Protocol) or some other mail transfer
protocol and RFC 822 for electronic mail.

Following are the limitations of SMTP/822 scheme:

1. SMTP cannot transmit executable files or other binary objects.

2. SMTP cannot transmit text data that includes national language characters, because these are represented by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit ASCII.

3. SMTP servers may reject mail messages over a certain size.

4. SMTP gateways that translate between ASCII and the character code EBCDIC do not use a
consistent set of mappings, resulting in translation problems.

5. SMTP gateways to X.400 electronic mail networks cannot handle nontextual data included in X.400 messages.

6. Some SMTP implementations do not adhere completely to the SMTP standards defined in RFC 821.

Common problems include:

• Deletion, addition, or reordering of carriage return and linefeed

• Truncating or wrapping lines longer than 76 characters

• Removal of trailing white space (tab and space characters)

• Padding of lines in a message to the same length

• Conversion of tab characters into multiple space characters

MIME is intended to resolve these problems in a manner that is compatible with existing RFC 822 implementations. The specification is provided in RFCs 2045 through 2049.

OVERVIEW

The MIME specification includes the following elements:

1. Five new message header fields are defined, which may be included in an RFC 822 header.
These fields provide information about the body of the message.

2. A number of content formats are defined, thus standardizing representations that support
multimedia electronic mail.
3. Transfer encodings are defined that enable the conversion of any content format into a form that is protected from alteration by the mail system. We first introduce the five message header fields; content formats and transfer encodings are dealt with subsequently.

The five header fields defined in MIME are as follows:

• MIME-Version: Must have the parameter value 1.0. This field indicates that the message
conforms to RFCs 2045 and 2046.

• Content-Type: Describes the data contained in the body in sufficient detail.

• Content-Transfer-Encoding: Indicates the type of transformation that has been used to represent the body of the message in a way that is acceptable for mail transport.

• Content-ID: Used to identify MIME entities uniquely in multiple contexts.

• Content-Description: A text description of the object with the body; this is useful when the object is not readable (e.g., audio data).

MIME Content Types

There are seven different major types of content and a total of 15 subtypes.
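The header fields listed above are easiest to see on an actual message. The following sketch uses Python's standard-library email package to build a simple text message and print its MIME headers; the subject and body are placeholders.

```python
from email.mime.text import MIMEText

# Build a simple text message; a non-ASCII body forces a transfer encoding.
msg = MIMEText("Grüße aus dem Netz", "plain", "utf-8")
msg["Subject"] = "MIME header demo"
print(msg.as_string())
# The output includes headers such as:
#   Content-Type: text/plain; charset="utf-8"
#   MIME-Version: 1.0
#   Content-Transfer-Encoding: base64
```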
Multipurpose Internet Mail Extensions:

Multipurpose Internet Mail Extension (MIME) is an extension to the RFC 5322 framework
that is intended to address some of the problems and limitations of the use of Simple Mail
Transfer Protocol (SMTP), defined in RFC 821, or some other mail transfer protocol and
RFC 5322 for electronic mail.

[PARZ06] lists the following limitations of the SMTP/5322 scheme.

1. SMTP cannot transmit executable files or other binary objects. A number of schemes are in
use for converting binary files into a text form that can be used by SMTP mail systems,
including the popular UNIX UUencode/ UUdecode scheme. However, none of these is a
standard or even a de facto standard.

2. SMTP cannot transmit text data that includes national language characters, because these are represented by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit ASCII.

3. SMTP servers may reject mail messages over a certain size.

4. SMTP gateways that translate between ASCII and the character code EBCDIC do not use
a consistent set of mappings, resulting in translation problems.

5. SMTP gateways to X.400 electronic mail networks cannot handle nontextual data included
in X.400 messages.

6. Some SMTP implementations do not adhere completely to the SMTP standards defined in
RFC 821.

Common problems include:

• Deletion, addition, or reordering of carriage return and linefeed

• Truncating or wrapping lines longer than 76 characters

• Removal of trailing white space (tab and space characters)

• Padding of lines in a message to the same length


• Conversion of tab characters into multiple space characters

MIME is intended to resolve these problems in a manner that is compatible with existing RFC 5322 implementations. The specification is provided in RFCs 2045 through 2049.

S/MIME Functions

• Enveloped data: This consists of encrypted content of any type and encrypted content-encryption keys for one or more recipients.

• Signed data: A digital signature is formed by taking the message digest of the content to be
signed and then encrypting that with the private key of the signer. The content plus signature
are then encoded using base64 encoding. A signed data message can only be viewed by a
recipient with S/MIME capability.

• Clear-signed data: As with signed data, a digital signature of the content is formed.
However, in this case, only the digital signature is encoded using base64. As a result,
recipients without S/MIME capability can view the message content, although they cannot
verify the signature.

• Signed and enveloped data: Signed-only and encrypted-only entities may be nested, so that
encrypted data may be signed and signed data or clear-signed data may be encrypted.
Cryptographic Algorithms

• MUST: The definition is an absolute requirement of the specification. An implementation


must include this feature or function to be in conformance with the specification.

• SHOULD: There may exist valid reasons in particular circumstances to ignore this feature
or function, but it is recommended that an implementation include the feature or function.

Enhanced Security Services

• Signed receipts: A signed receipt may be requested in a SignedData object. Returning a


signed receipt provides proof of delivery to the originator of a message and allows the
originator to demonstrate to a third party that the recipient received the message. In essence,
the recipient signs the entire original message plus the original (sender’s) signature and
appends the new signature to form a new S/MIME message.

• Security labels: A security label may be included in the authenticated attributes of a


SignedData object. A security label is a set of security information regarding the sensitivity
of the content that is protected by S/MIME encapsulation. The labels may be used for access
control, by indicating which users are permitted access to an object. Other uses include
priority (secret, confidential, restricted, and so on) or role based, describing which kind of
people can see the information (e.g., patient’s health-care team, medical billing agents, etc.).

• Secure mailing lists: When a user sends a message to multiple recipients, a certain amount
of per-recipient processing is required, including the use of each recipient’s public key. The
user can be relieved of this work by employing the services of an S/MIME Mail List Agent
(MLA). An MLA can take a single incoming message, perform the recipient-specific
encryption for each recipient, and forward the message. The originator of a message need
only send the message to the MLA with encryption performed using the MLA’s public key.

IP SECURITY:
OVERVIEW OF IPSEC

IP Sec (Internet Protocol Security) is an Internet Engineering Task Force (IETF) standard
suite of protocols between two communication points across the IP network that provide
data authentication, integrity, and confidentiality. It also defines the encrypted, decrypted,
and authenticated packets. The protocols needed for secure key exchange and key
management are defined in it.

Applications of IPSec:

IPSec provides the capability to secure communications across a LAN, across private and
public WANs, and across the Internet.

Examples of its use include the following:

• Secure branch office connectivity over the Internet

• Secure remote access over the Internet

• Establishing extranet and intranet connectivity with partners

• Enhancing electronic commerce security

Components of IP Security:
1. Encapsulating Security Payload (ESP)
2. Authentication Header (AH)
3. Internet Key Exchange (IKE)
1. Encapsulating Security Payload (ESP): It provides data integrity, encryption,
authentication, and anti-replay. It also provides authentication for payload.
2. Authentication Header (AH): It also provides data integrity, authentication, and anti-
replay and it does not provide encryption. The anti-replay protection protects against the
unauthorized transmission of packets. It does not protect data confidentiality.

3. Internet Key Exchange (IKE): IKE is a network security protocol designed to dynamically exchange encryption keys and negotiate a Security Association (SA) between two devices. The Security Association (SA) establishes shared security attributes between two network entities to support secure communication. The Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for authentication and key exchange; it defines how Security Associations (SAs) are set up and how hosts establish direct connections using IPsec. IKE provides message content protection and an open framework for implementing standard algorithms such as SHA and MD5. These algorithms produce a unique identifier for each packet, which allows a device to determine whether a packet is authentic. Packets that are not authorized are discarded and not delivered to the receiver.

IP Security Architecture
IPSec (IP Security) architecture uses two protocols to secure the traffic or data flow. These
protocols are ESP (Encapsulation Security Payload) and AH (Authentication Header).
IPSec Architecture includes protocols, algorithms, DOI, and Key Management. All these
components are very important in order to provide the three main services:
 Confidentiality
 Authenticity
 Integrity
Working on IP Security
 The host checks whether the packet should be transmitted using IPsec. Such packet traffic triggers the relevant security policy, and the sending system applies the appropriate encryption. The host also checks that incoming packets are properly encrypted.
 Then IKE Phase 1 starts, in which the two hosts (using IPsec) authenticate themselves to each other to establish a secure channel. Phase 1 has two modes: Main mode, which provides greater security, and Aggressive mode, which enables the hosts to establish an IPsec circuit more quickly.
 The channel created in the last step is then used to securely negotiate the way the IP
circuit will encrypt data across the IP circuit.
 Now, the IKE Phase 2 is conducted over the secure channel in which the two hosts
negotiate the type of cryptographic algorithms to use on the session and agree on secret
keying material to be used with those algorithms.
 Then the data is exchanged across the newly created IPsec encrypted tunnel. These
packets are encrypted and decrypted by the hosts using IPsec SAs.
 When the communication between the hosts is completed or the session times out, the IPsec tunnel is terminated and both hosts discard the keys.

Features of IPSec
1. Authentication: IPSec provides authentication of IP packets using digital signatures or
shared secrets. This helps ensure that the packets are not tampered with or forged.
2. Confidentiality: IPSec provides confidentiality by encrypting IP packets, preventing
eavesdropping on the network traffic.
3. Integrity: IPSec provides integrity by ensuring that IP packets have not been modified
or corrupted during transmission.
4. Key management: IPSec provides key management services, including key exchange
and key revocation, to ensure that cryptographic keys are securely managed.
5. Tunneling: IPSec supports tunneling, allowing IP packets to be encapsulated within
another protocol, such as GRE (Generic Routing Encapsulation) or L2TP (Layer 2
Tunneling Protocol).
6. Flexibility: IPSec can be configured to provide security for a wide range of network
topologies, including point-to-point, site-to-site, and remote access connections.
7. Interoperability: IPSec is an open standard protocol, which means that it is supported
by a wide range of vendors and can be used in heterogeneous environments.
Advantages of IPSec
1. Strong security: IPSec provides strong cryptographic security services that help protect
sensitive data and ensure network privacy and integrity.
2. Wide compatibility: IPSec is an open standard protocol that is widely supported by
vendors and can be used in heterogeneous environments.
3. Flexibility: IPSec can be configured to provide security for a wide range of network
topologies, including point-to-point, site-to-site, and remote access connections.
4. Scalability: IPSec can be used to secure large-scale networks and can be scaled up or
down as needed.
5. Improved network performance: IPSec can help improve network performance by
reducing network congestion and improving network efficiency.
Disadvantages of IPSec
1. Configuration complexity: IPSec can be complex to configure and requires specialized
knowledge and skills.
2. Compatibility issues: IPSec can have compatibility issues with some network devices
and applications, which can lead to interoperability problems.
3. Performance impact: IPSec can impact network performance due to the overhead of
encryption and decryption of IP packets.
4. Key management: IPSec requires effective key management to ensure the security of
the cryptographic keys used for encryption and authentication.
5. Limited protection: IPSec only provides protection for IP traffic, and other protocols
such as ICMP, DNS, and routing protocols may still be vulnerable to attacks.

ESP, AH Protocols IPSec Modes:


Encapsulating Security Payload (ESP)

• RFC 4303 (IP Encapsulating Security Payload)

• ESP allows for encryption, as well as authentication.

– Both are optional, defined by the SPI and policies.

• A null encryption algorithm was proposed

– Thus AH in a sense is not needed

– Protocol type in IP header is set to 50

• ESP does not protect the IP header, only the payload

– in tunnel mode original packet is encrypted

– In transport mode original packet data is encrypted

– This includes higher level protocols and ports. (NATs and firewalls may need this
information).

• ESP header is actually a header plus a trailer as it “surrounds” the packet data

• Can actually combine AH and ESP but rarely done

Services provided include:

– Confidentiality

– Data origin authentication

– Connectionless integrity

– Anti-replay service

– Limited traffic flow confidentiality

• Security services can be provided between


– A pair of communicating hosts

– A pair of security gateways

– A security gateway and a host

• The “header” fields

– SPI

– Sequence Number

• The “data” part

– Optionally may have an IV added (in clear if necessary)

– Has variable length padding

• Sometimes needed for encryption

• Sometimes masks encryption

• Sometimes used to mask traffic flow

• The “trailer” part

– Padding length

– Next header
• In tunnel mode would be set to 4

• In transport mode would be set to the protocol of the original packet data

• ESP can also have NAT/PAT problems

– If transport layer information is used.

AH Protocols IP Sec Modes:


Authentication Header (AH)

• RFC 4302 (IP Authentication Header)

• The IP AH is used to provide

– Connectionless integrity

– Data origin authentication

– Protection against replays.


• AH provides authentication for as much of the IP header as possible, although some header fields cannot be protected by AH because they change in transit.

• Data privacy is not provided by AH (all data is in the clear).

Next Header: protocol type of following payload

Payload Length: length (in 32 bit words) of the AH Header minus 2 (note that it is
actually the AH header length, instead of payload length)

Sequence Number: monotonically increasing number

Authentication Data: Integrity check value (ICV) over most of the packet.

IP Sec in AH Transport Mode:

• AH covers all immutable fields of IP & AH headers and payload by computing a


MAC.

• Does not cover

– IP Header: TOS, flags, frag offset, TTL, header checksum (note: the packet length field is covered using its modified value)

– AH Header: Authentication Data

• Modification of the IP Header

– protocol field changed to AH = 51

• current value of protocol field inserted into IP Sec Header

– Packet length field changed.


IPSec in AH Tunnel Mode:

• AH covers all immutable fields of the headers and payload

• Does not cover

– IP Header: TOS, flags, frag offset, TTL, header checksum

– AH Header: Authentication Data

• New IP Header is created with appropriate source and destination IP addresses

– protocol field set to AH = 51

• IPSec Header

– next field is set to IP = 4


• HMAC incorporates a secret key

• Exact authentication function and keys negotiated by end points

• Tunnel Mode vs. Transport Mode identified by the next header type in the IPSec Header
(also true of ESP)

– if 4 then must be Tunnel mode

– else Transport mode

• AH is incompatible with NAT / PAT devices

– Network Address Translation

– Port address translation

– change of (private) source address, for example, at a NAT box does not allow re-
computation of the HMAC by the destination

Security Associations:
An IPSec protected connection is called a security association
• The SPI used in identifying the SA is normally chosen by the receiving system (destination)

• Basic Processing

– for outbound packets, a packet’s selector is used to determine the processing to be applied
to the packet

– More complex than for inbound where the received SPI, destination address and protocol
type uniquely point to an SA.

Some Security Association Selectors

• Destination IP address

• Source IP address

• Name

• Next layer protocol

• RFC 4301
KEY MANAGEMENT: The key management portion of IPSec involves the
determination and distribution of secret keys.

Two types of key management:

Manual: A system administrator manually configures each system with its own keys
and with the keys of other communicating systems. This is practical for small, relatively
static environments.

Automated: An automated system enables the on-demand creation of keys for SAs
and facilitates the use of keys in a large distributed system with an evolving configuration.
The default automated key management protocol for IPSec is referred to as ISAKMP/Oakley
and consists of the following elements:

 Oakley Key Determination Protocol: Oakley is a key exchange protocol based on the Diffie-Hellman algorithm but providing added security. Oakley is generic in that it does not dictate specific formats (a bare Diffie-Hellman sketch is given after this list).
 Internet Security Association and Key Management Protocol
(ISAKMP):ISAKMP provides a framework for Internet key management and
provides the specific protocol support, including formats, for negotiation of
security attributes.
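Since Oakley builds on the Diffie-Hellman exchange, the underlying arithmetic is worth a quick sketch. This shows only the bare exchange with a toy group; real deployments use large, standardized groups, and Oakley adds cookies, nonces, and authentication on top.

```python
import secrets

# Toy Diffie-Hellman exchange; the prime is far too small for real use and Oakley's
# cookies, nonces, and authentication are omitted.
p = 0xFFFFFFFFFFFFFFC5          # 2**64 - 59, a known prime (illustrative only)
g = 5

a = secrets.randbelow(p - 2) + 1     # initiator's secret value
b = secrets.randbelow(p - 2) + 1     # responder's secret value

A = pow(g, a, p)                     # initiator sends g^a mod p
B = pow(g, b, p)                     # responder sends g^b mod p

assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same shared secret
print(hex(pow(B, a, p)))
```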
UNIT V WEB SECURITY

Web Security: Requirements- Secure Sockets Layer- Objectives-Layers –


SSL secure communication-Protocols - Transport Level Security. Secure
Electronic Transaction- Entities DS Verification-SET processing.

Web Security:
Web security refers to protecting networks and computer systems from damage to or the theft
of software, hardware, or data. It includes protecting computer systems from misdirecting or
disrupting the services they are designed to provide.

A collection of application-layer services used to distribute content:

– Web content (HTML)

– Multimedia

– Email

– Instant messaging

• Many applications:

– News outlets, entertainment, education, research and technology, …

– Commercial, consumer and B2B

• The largest distributed system in existence:

– threats are as diverse as applications and users

– But need to be thought out carefully …

WEB SECURITY CONSIDERATIONS:

The World Wide Web is fundamentally a client/server application running over the Internet
and TCP/IP intranets.

Security Consideration:

 Updated Software: You need to always update your software. Hackers may be aware
of vulnerabilities in certain software, which are sometimes caused by bugs and can be
used to damage your computer system and steal personal data. Older versions of
software can become a gateway for hackers to enter your network. Software makers
soon become aware of these vulnerabilities and will fix vulnerable or exposed areas.
That’s why It is mandatory to keep your software updated, It plays an important role in
keeping your personal data secure.
 Beware of SQL Injection: SQL injection is an attempt to manipulate your data or your database by inserting rogue code into a query. For example, somebody can send a query to your website containing rogue code, and when it gets executed it can be used to manipulate your database, such as changing tables, modifying or deleting data, or retrieving sensitive information. Everyone should therefore be aware of SQL injection attacks (a parameterized-query sketch is given after this list).
 Cross-Site Scripting (XSS): XSS allows the attackers to insert client-side script into
web pages. E.g. Submission of forms. It is a term used to describe a class of attacks that
allow an attacker to inject client-side scripts into other users’ browsers through a
website. As the injected code enters the browser from the site, the code is reliable and
can do things like sending the user’s site authorization cookie to the attacker.
 Error Messages: You need to be very careful about error messages which are
generated to give the information to the users while users access the website and some
error messages are generated due to one or another reason and you should be very
careful while providing the information to the users. For e.g. login attempt – If the user
fails to login the error message should not let the user know which field is incorrect:
Username or Password.
 Data Validation: Data validation is the proper testing of any input supplied by the user
or application. It prevents improperly created data from entering the information
system. Validation of data should be performed on both the server side and the client side; performing it on both sides gives stronger assurance. Data validation should occur when data is received from an outside party, especially if the
data is from untrusted sources.
 Password: Password provides the first line of defense against unauthorized access to
your device and personal information. It is necessary to use a strong password. Hackers
in many cases use sophisticated software that uses brute force to crack passwords.
Passwords must be complex to protect against brute force. It is good to enforce
password requirements such as a minimum of eight characters long must including
uppercase letters, lowercase letters, special characters, and numerals.
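Parameterized queries are the standard defense against SQL injection mentioned above. A minimal sketch using Python's built-in sqlite3 module; the table, columns, and sample data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"      # a typical injection attempt

# Unsafe pattern (do not do this): string concatenation lets input rewrite the query.
# query = "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safe pattern: the placeholder keeps the input as data, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)    # [] -- the injection attempt matches nothing
```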
Web Security Threats:
Two types of attacks are:

Passive attacks include eavesdropping on network traffic between browser and server and
gaining access to information on a Web site that is supposed to be restricted.

Active attacks include impersonating another user, altering messages in transit between
client and server, and altering information on a Web site.

Security Threats:

A threat is a possible event that can damage and harm an information system. A security threat is defined as a risk that can potentially harm computer systems and organizations. Whenever an individual or an organization creates a website, it is vulnerable to security attacks.
Security attacks are mainly aimed at stealing, altering, or destroying personal and confidential information, stealing hard drive space, and illegally accessing passwords. If the website you created is vulnerable to such attacks, attackers can steal your data, alter or destroy it, read your confidential information, and gain access to your passwords.
Transport Layer Security:
One of the most widely used security services is Transport Layer Security (TLS); the current version is Version 1.2, defined in RFC 5246. TLS is an Internet standard that evolved from a commercial protocol known as Secure Sockets Layer (SSL). Although SSL implementations are still around, it has been deprecated by the IETF and is disabled by most corporations offering TLS software. TLS is a general-purpose service implemented as a set of protocols that rely on TCP. At this level, there are two implementation choices. For full generality, TLS could be provided as part of the underlying protocol suite and therefore be transparent to applications. Alternatively, TLS can be embedded in specific packages. For example, most browsers come equipped with TLS, and most Web servers have implemented the protocol.
TLS Architecture :
TLS is designed to make use of TCP to provide a reliable end-to-end secure service.
The TLS Record Protocol provides basic security services to various higher-layer protocols. In particular, the Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server interaction, can operate on top of TLS.

Three higher-layer protocols are defined as part of TLS:

 The Handshake Protocol;


 The Change Cipher Spec Protocol;
 The Alert Protocol.

These TLS-specific protocols are used in the management of TLS exchanges and are examined later in this section. A fourth protocol, the Heartbeat Protocol, is defined in a separate RFC and is also discussed subsequently in this section. Two important TLS concepts are the TLS session and the TLS connection,

which are defined in the specification as follows:

■ Connection: A connection is a transport (in the OSI layering model definition) that provides a suitable type of service. For TLS, such connections are peer-to-peer relationships. The connections are transient. Every connection is associated with one session.

■ Session: A TLS session is an association between a client and a server. Sessions are
created by the Handshake Protocol. Sessions define a set of cryptographic security
parameters, which can be shared among multiple connections. Sessions are used to avoid the
expensive negotiation of new security parameters for each connection.

Between any pair of parties (applications such as HTTP on client and server), there may be
multiple secure connections. In theory, there may also be multiple simultaneous sessions
between parties, but this feature is not used in practice. There are a number of states
associated with each session. Once a session is established, there is a current operating state
for both read and write (i.e., receive and send). In addition, during the Handshake Protocol,
pending read and write states are created. Upon successful conclusion of the Handshake
Protocol, the pending states become the current states.
A session state is defined by the following parameters:

■ Session identifier: An arbitrary byte sequence chosen by the server to identify an active or
resumable session state.

■ Peer certificate: An X.509v3 certificate of the peer. This element of the state may be null.

■ Compression method: The algorithm used to compress data prior to encryption.

■ Cipher spec: Specifies the bulk data encryption algorithm (such as null, AES, etc.) and a
hash algorithm (such as MD5 or SHA-1) used for MAC calculation. It also defines
cryptographic attributes such as the hash_size.

■ Master secret: 48-byte secret shared between the client and server.

■ Is resumable: A flag indicating whether the session can be used to initiate new connections.

A connection state is defined by the following parameters:

■ Server and client random: Byte sequences that are chosen by the server and client for each
connection.

■ Server write MAC secret: The secret key used in MAC operations on data sent by the
server.

■ Client write MAC secret: The symmetric key used in MAC operations on data sent by
the client.

■ Server write key: The symmetric encryption key for data encrypted by the server and
decrypted by the client.

■ Client write key: The symmetric encryption key for data encrypted by the client and
decrypted by the server.

■ Initialization vectors: When a block cipher in CBC mode is used, an initialization vector
(IV) is maintained for each key. This field is first initialized by the TLS Handshake Protocol.
Thereafter, the final ciphertext block from each record is preserved for use as the IV with the
following record.

■ Sequence numbers: Each party maintains separate sequence numbers for transmitted and received messages for each connection. When a party sends or receives a change_cipher_spec message, the appropriate sequence number is set to zero. Sequence numbers may not exceed 2^64 - 1.

TLS Record Protocol

The TLS Record Protocol provides two services for TLS connections:

■ Confidentiality: The Handshake Protocol defines a shared secret key that is used for
conventional encryption of TLS payloads.

■ Message Integrity: The Handshake Protocol also defines a shared secret key that is used to
form a message authentication code (MAC).
Figure 17.3 indicates the overall operation of the TLS Record Protocol. The Record Protocol
takes an application message to be transmitted, fragments the data into manageable blocks,
optionally compresses the data, applies a MAC, encrypts, adds a header, and transmits the
resulting unit in a TCP segment. Received data are decrypted, verified, decompressed, and
reassembled before being delivered to higher-level users. The first step is fragmentation.
Each upper-layer message is fragmented into blocks of 2^14 bytes (16,384 bytes) or less.
Next, compression is optionally applied. Compression must be lossless and may not increase
the content length by more than

1024 bytes. In TLSv2, no compression algorithm is specified, so the default compression


algorithm is null. The next step in processing is to compute a message authentication code
over the compressed data.

TLS makes use of the HMAC algorithm defined in RFC 2104.

Recall from Chapter 12 that HMAC is defined as

HMAC_K(M) = H[(K+ ⊕ opad) || H[(K+ ⊕ ipad) || M]]

Where

H = embedded hash function (for TLS, either MD5 or SHA-1)

M = message input to HMAC

K+ = secret key padded with zeros on the left so that the result is equal to the block length of
the hash code (for MD5 and SHA-1, block length = 512 bits)

ipad = 00110110 (36 in hexadecimal) repeated 64 times (512 bits)

opad = 01011100 (5C in hexadecimal) repeated 64 times (512 bits)

For TLS, the MAC calculation encompasses the fields indicated in the following expression:

HMAC_hash(MAC_write_secret, seq_num || TLSCompressed.type || TLSCompressed.version || TLSCompressed.length || TLSCompressed.fragment)

The MAC calculation covers all of the fields in this expression, including TLSCompressed.version, which is the version of the protocol being employed.
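A minimal sketch of this MAC computation using Python's standard hmac module follows. The secret, sequence number, content type, version bytes, and fragment are illustrative assumptions, and SHA-1 stands in for whichever hash the negotiated cipher spec selects.

```python
import hashlib
import hmac
import struct

mac_write_secret = b"\x01" * 20        # illustrative shared MAC secret
seq_num = 0                            # 64-bit per-connection sequence number
content_type = 23                      # application_data (illustrative value)
major, minor = 3, 3                    # protocol version bytes (illustrative)
fragment = b"hello over TLS"           # compressed fragment (illustrative)

mac_input = (
    struct.pack("!Q", seq_num)          # seq_num
    + struct.pack("!B", content_type)   # TLSCompressed.type
    + struct.pack("!BB", major, minor)  # TLSCompressed.version
    + struct.pack("!H", len(fragment))  # TLSCompressed.length
    + fragment                          # TLSCompressed.fragment
)
mac = hmac.new(mac_write_secret, mac_input, hashlib.sha1).digest()
print(mac.hex())
```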
Next, the compressed message plus the MAC are encrypted using symmetric encryption. Encryption may not increase the content length by more than 1024 bytes, so that the total length may not exceed 2^14 + 2048. The following encryption algorithms are permitted:

For stream encryption, the compressed message plus the MAC are encrypted. Note that the
MAC is computed before encryption takes place and that the MAC is then encrypted along
with the plaintext or compressed plaintext. For block encryption, padding may be added after
the MAC prior to encryption. The padding is in the form of a number of padding bytes
followed by a onebyte indication of the length of the padding. The padding can be any
amount that results in a total that is a multiple of the cipher’s block length, up to a maximum
of 255 bytes. For example, if the cipher block length is 16 bytes (e.g., AES) and if the
plaintext (or compressed text if compression is used) plus MAC plus padding length byte is
79 bytes long, then the padding length (in bytes) can be 1, 17, 33, and so on, up to 161. At a
padding length of 161, the total length is 79 + 161 = 240. A variable padding length may be
used to frustrate attacks based on an analysis of the lengths of exchanged messages.

The final step of TLS Record Protocol processing is to prepend a header consisting of the
following fields:
■ Content Type (8 bits): The higher-layer protocol used to process the enclosed fragment.

■ Major Version (8 bits): Indicates major version of TLS in use. For TLSv2, the value is 3.

■ Minor Version (8 bits): Indicates minor version in use. For TLSv2, the value is 1.

■ Compressed Length (16 bits): The length in bytes of the plaintext fragment (or compressed fragment if compression is used). The maximum value is 2^14 + 2048. The content types that have been defined are change_cipher_spec, alert, handshake, and application_data. The first three are the TLS-specific protocols, discussed next. Note that no distinction is made among the various applications (e.g., HTTP) that might use TLS; the content of the data created by such applications is opaque to TLS.

Handshake Protocol :

The most complex part of TLS is the Handshake Protocol. This protocol allows the server
and client to authenticate each other and to negotiate an encryption and MAC algorithm and
cryptographic keys to be used to protect data sent in a TLS record. The Handshake Protocol
is used before any application data is transmitted. The Handshake Protocol consists of a
series of messages exchanged by client and server.

All of these have the format shown in Figure 17.5c. Each message has three fields (a small packing sketch follows this list):

■ Type (1 byte): Indicates one of 10 messages. Table 17.2 lists the defined message types.

■ Length (3 bytes): The length of the message in bytes.

■ Content (≥ 0 bytes): The parameters associated with this message; these are listed in Table 17.2.

Figure 17.6 shows the initial exchange needed to establish a logical connection between client and server. The exchange can be viewed as having four phases.
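The three-field handshake layout (1-byte type, 3-byte length, content) can be sketched directly in Python; the type code and body here are illustrative assumptions, not a complete handshake message.

```python
# Sketch of the generic handshake layout: 1-byte type, 3-byte length, then the content.
handshake_type = 1                       # e.g., client_hello (illustrative code)
body = b"\x03\x03" + b"\x00" * 32        # placeholder content

message = bytes([handshake_type]) + len(body).to_bytes(3, "big") + body
print(message.hex())
```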

PHASE 1. ESTABLISH SECURITY CAPABILITIES:

Phase 1 initiates a logical connection and establishes the security capabilities that will be
associated with it.

The exchange is initiated by the client, which sends a client_hello message with the
following parameters:

■ Version: The highest TLS version understood by the client.

■ Random: A client-generated random structure consisting of a 32-bit timestamp and 28 bytes generated by a secure random number generator. These values serve as nonces and are used during key exchange to prevent replay attacks (a short construction sketch is given after this list).

■ Session ID: A variable-length session identifier. A nonzero value indicates that the client
wishes to update the parameters of an existing connection or to create a new connection on
this session. A zero value indicates that the client wishes to establish a new connection on a
new session.
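The client random described above (a 32-bit timestamp plus 28 secure-random bytes) can be produced as follows; this sketches only the construction of that one field, not a full client_hello.

```python
import os
import struct
import time

# Client random: 4-byte timestamp followed by 28 bytes from a secure random source.
client_random = struct.pack("!I", int(time.time())) + os.urandom(28)
assert len(client_random) == 32
print(client_random.hex())
```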
WIRELESS SECURITY :

Wireless networks, and the wireless devices that use them, introduce a host of security
problems over and above those found in wired networks. Some of the key factors
contributing to the higher security risk of wireless networks compared to wired networks
include the following [MA10]:

■ Channel: Wireless networking typically involves broadcast communications, which is far


more susceptible to eavesdropping and jamming than wired networks. Wireless networks are
also more vulnerable to active attacks that exploit vulnerabilities in communications
protocols.
■ Mobility: Wireless devices are, in principle and usually in practice, far more portable and mobile than wired devices. This mobility results in a number of risks, described subsequently.

■ Resources: Some wireless devices, such as smartphones and tablets, have sophisticated
operating systems but limited memory and processing resources with which to counter
threats, including denial of service and malware.

■ Accessibility: Some wireless devices, such as sensors and robots, may be left unattended
in remote and/or hostile locations. This greatly increases their vulnerability to physical
attacks.

In simple terms, the wireless environment consists of three components that provide point of
attack (Figure 18.1). The wireless client can be a cell phone, a Wi-Fi–enabled laptop or
tablet, a wireless sensor, a Bluetooth device, and so on. The wireless access point provides a
connection to the network or service. Examples of access points are cell towers, Wi-Fi
hotspots, and wireless access points to wired local or wide area networks. The transmission
medium, which carries the radio waves for data transfer, is also a source of vulnerability.

Wireless Network Threats

[CHOI08] lists the following security threats to wireless networks:

■ Accidental association: Company wireless LANs or wireless access points to wired LANs
in close proximity (e.g., in the same or neighboring buildings) may create overlapping
transmission ranges. A user intending to connect to one LAN may unintentionally lock on to
a wireless access point from a neighboring network. Although the security breach is
accidental, it nevertheless exposes resources of one LAN to the accidental user.

■ Malicious association: In this situation, a wireless device is configured to appear to be a


legitimate access point, enabling the operator to steal passwords from legitimate users and
then penetrate a wired network through a legitimate wireless access point.

■ Ad hoc networks: These are peer-to-peer networks between wireless computers with no
access point between them. Such networks can pose a security threat due to a lack of a central
point of control.

■ Nontraditional networks: Nontraditional networks and links, such as personal network


Bluetooth devices, barcode readers, and handheld PDAs, pose a security risk in terms of both
eavesdropping and spoofing.
■ Identity theft (MAC spoofing): This occurs when an attacker is able to eavesdrop on
network traffic and identify the MAC address of a computer with network privileges.

■ Man-in-the middle attacks: This type of attack is described in Chapter 10 in the context of
the Diffie–Hellman key exchange protocol. In a broader sense, this attack involves
persuading a user and an access point to believe that they are talking to each other when in
fact the communication is going through an intermediate attacking device. Wireless networks
are particularly vulnerable to such attacks

Denial of service (DoS): This type of attack is discussed in detail in Chapter 21. In the
context of a wireless network, a DoS attack occurs when an attacker continually bombards a
wireless access point or some other accessible wireless port with various protocol messages
designed to consume system resources. The wireless environment lends itself to this type of
attack, because it is so easy for the attacker to direct multiple wireless messages at the target.

■ Network injection: A network injection attack targets wireless access points that are
exposed to nonfiltered network traffic, such as routing protocol messages or network
management messages. An example of such an attack is one in which bogus reconfiguration
commands are used to affect routers and switches to degrade network performance.

Wireless Security Measures

Following [CHOI08], we can group wireless security measures into those dealing with wireless transmissions, wireless access points, and wireless networks (consisting of wireless routers and endpoints).

SECURING WIRELESS TRANSMISSIONS

The principal threats to wireless transmission are eavesdropping, altering or inserting messages, and disruption.

To deal with eavesdropping, two types of countermeasures are appropriate:

■ Signal-hiding techniques: Organizations can take a number of measures to make it more


difficult for an attacker to locate their wireless access points, including turning off service set
identifier (SSID) broadcasting by wireless access points; assigning cryptic names to SSIDs;
reducing signal strength to the lowest level that still provides requisite coverage; and locating
wireless access points in the interior of the building, away from windows and exterior walls.
Greater security can be achieved by the use of directional antennas and of signal-shielding
techniques.

■ Encryption: Encryption of all wireless transmission is effective against eavesdropping to


the extent that the encryption keys are secured. The use of encryption and authentication
protocols is the standard method of countering attempts to alter or insert transmissions. The
methods discussed in Chapter 21 for dealing with DoS apply to wireless transmissions.
Organizations can also reduce the risk of unintentional DoS attacks. Site surveys can detect
the existence of other devices using the same frequency range, to help determine where to
locate wireless access points. Signal strengths can be adjusted and shielding used in an
attempt to isolate a wireless environment from competing nearby transmissions.
SECURING WIRELESS ACCESS POINTS:

The main threat involving wireless access points is unauthorized access to the network. The
principal approach for preventing such access is the IEEE 802.1X standard for port-based
network access control. The standard provides an authentication mechanism for devices
wishing to attach to a LAN or wireless network. The use of 802.1X can prevent rogue access
points and other unauthorized devices from becoming insecure backdoors. Section 16.3
provides an introduction to 802.1X

SECURING WIRELESS NETWORKS

[CHOI08] recommends the following techniques for wireless network security:

1. Use encryption. Wireless routers are typically equipped with built-in encryption
mechanisms for router-to-router traffic
2. Use antivirus and antispyware software, and a firewall. These facilities should be enabled
on all wireless network endpoints.
3. Turn off identifier broadcasting. Wireless routers are typically configured to broadcast
an identifying signal so that any device within range can learn of the router’s existence. If
a network is configured so that authorized devices know the identity of routers, this
capability can be disabled, so as to thwart attackers.
4. Change the identifier on your router from the default. Again, this measure thwarts
attackers who will attempt to gain access to a wireless network using default router
identifiers.
5. Change your router’s pre-set password for administration. This is another prudent step.
6. Allow only specific computers to access your wireless network. A router can be
configured to only communicate with approved MAC addresses. Of course, MAC
addresses can be spoofed, so this is just one element of a security strategy.

Secure Socket Layer (SSL):


Secure Socket Layer (SSL) provides security to the data that is transferred between web
browser and server. SSL encrypts the link between a web server and a browser which ensures
that all data passed between them remain private and free from attack.
Secure Socket Layer Protocols:
 SSL record protocol
 Handshake protocol
 Change-cipher spec protocol
 Alert protocol
SSL Protocol Stack:
SSL Record Protocol:
SSL Record provides two services to SSL connection.
 Confidentiality
 Message Integrity
In the SSL Record Protocol, application data is divided into fragments. Each fragment is compressed, and then a MAC (Message Authentication Code) generated by an algorithm such as SHA (Secure Hash Algorithm) or MD5 (Message Digest) is appended. After that, the data is encrypted and, finally, the SSL header is appended.

Handshake Protocol:
Handshake Protocol is used to establish sessions. This protocol allows the client and server to
authenticate each other by sending a series of messages to each other. Handshake protocol
uses four phases to complete its cycle.
 Phase-1: In Phase-1 both Client and Server send hello-packets to each other. In this IP
session, cipher suite and protocol version are exchanged for security purposes.
 Phase-2: The server sends its certificate and Server-key-exchange message. The server ends Phase-2 by sending the Server-hello-done packet.
 Phase-3: In this phase, the client replies to the server by sending its certificate and Client-key-exchange message.
 Phase-4: In Phase-4, the Change-cipher suite occurs, and after this the Handshake Protocol ends.
Change-cipher Protocol:
This protocol uses the SSL record protocol. Unless Handshake Protocol is completed, the
SSL record Output will be in a pending state. After the handshake protocol, the Pending state
is converted into the current state.
Change-cipher protocol consists of a single message which is 1 byte in length and can have
only one value. This protocol’s purpose is to cause the pending state to be copied into the
current state.
Alert Protocol:
This protocol is used to convey SSL-related alerts to the peer entity. Each message in this
protocol contains 2 bytes.

Warning (level = 1):


This Alert has no impact on the connection between sender and receiver. Some of them are:
Bad certificate: When the received certificate is corrupt.
No certificate: When an appropriate certificate is not available.
Certificate expired: When a certificate has expired.
Certificate unknown: When some other unspecified issue arose in processing the certificate,
rendering it unacceptable.
Close notify: It notifies that the sender will no longer send any messages in the connection.
Unsupported certificate: The type of certificate received is not supported.
Certificate revoked: The certificate received is in revocation list.

Fatal Error (level = 2):


This Alert breaks the connection between sender and receiver. The connection will be
stopped, cannot be resumed but can be restarted. Some of them are :
Handshake failure: When the sender is unable to negotiate an acceptable set of security
parameters given the options available.
Decompression failure: When the decompression function receives improper input.
Illegal parameters: When a field is out of range or inconsistent with other fields.
Bad record MAC: When an incorrect MAC was received.
Unexpected message: When an inappropriate message is received.
The second byte in the Alert protocol describes the error.
Salient Features of Secure Socket Layer:
 The advantage of this approach is that the service can be tailored to the specific needs of
the given application.
 Secure Socket Layer was originated by Netscape.
 SSL is designed to make use of TCP to provide reliable end-to-end secure service.
 This is a two-layered protocol.
Versions of SSL:
SSL 1 – Never released due to high insecurity.
SSL 2 – Released in 1995.
SSL 3 – Released in 1996.
TLS 1.0 – Released in 1999.
TLS 1.1 – Released in 2006.
TLS 1.2 – Released in 2008.
TLS 1.3 – Released in 2018.

SSL (Secure Sockets Layer) certificate is a digital certificate used to secure and verify the
identity of a website or an online service. The certificate is issued by a trusted third-party
called a Certificate Authority (CA), who verifies the identity of the website or service before
issuing the certificate.
The SSL certificate has several important characteristics that make it a reliable solution for
securing online transactions:
1. Encryption: The SSL certificate uses encryption algorithms to secure the communication
between the website or service and its users. This ensures that the sensitive information,
such as login credentials and credit card information, is protected from being intercepted
and read by unauthorized parties.
2. Authentication: The SSL certificate verifies the identity of the website or service,
ensuring that users are communicating with the intended party and not with an impostor.
This provides assurance to users that their information is being transmitted to a trusted
entity.
3. Integrity: The SSL certificate uses message authentication codes (MACs) to detect any
tampering with the data during transmission. This ensures that the data being transmitted
is not modified in any way, preserving its integrity.
4. Non-repudiation: SSL certificates provide non-repudiation of data, meaning that the
recipient of the data cannot deny having received it. This is important in situations where
the authenticity of the information needs to be established, such as in e-commerce
transactions.
5. Public-key cryptography: SSL certificates use public-key cryptography for secure key
exchange between the client and server. This allows the client and server to securely
exchange encryption keys, ensuring that the encrypted information can only be decrypted
by the intended recipient.
6. Session management: SSL certificates allow for the management of secure sessions,
allowing for the resumption of secure sessions after interruption. This helps to reduce the
overhead of establishing a new secure connection each time a user accesses a website or
service.
7. Certificates issued by trusted CAs: SSL certificates are issued by trusted CAs, who are
responsible for verifying the identity of the website or service before issuing the
certificate. This provides a high level of trust and assurance to users that the website or
service they are communicating with is authentic and trustworthy.

Secure Electronic Transaction (SET) Protocol:


Secure Electronic Transaction (SET) is a system that ensures the security and integrity of electronic transactions made using credit cards online.
SET is not itself a payment system; it is a security protocol applied to those payments. It uses different encryption and hashing techniques to secure payments over the internet made through credit cards.
The SET protocol was supported in development by major organizations like Visa,
Mastercard, and Microsoft which provided its Secure Transaction Technology (STT), and
Netscape which provided the technology of Secure Socket Layer (SSL).
SET protocol restricts the revealing of credit card details to merchants thus keeping hackers
and thieves at bay. The SET protocol includes Certification Authorities for making use of
standard Digital Certificates like X.509 Certificate.
Before discussing SET further, let’s see a general scenario of electronic transactions, which
includes client, payment gateway, client financial institution, merchant, and merchant
financial institution.
[Figure: General scenario of an online credit card transaction]
Requirements in SET: The SET protocol has several requirements to meet; the most important ones are:
 It has to provide mutual authentication, i.e., customer (or cardholder) authentication, confirming that the customer is a legitimate user of the card, and merchant authentication.
 It has to keep the PI (Payment Information) and OI (Order Information) confidential by
appropriate encryptions.
 It has to be resistant to message modification, i.e., no changes should be allowed in the content being transmitted.
 SET also needs to provide interoperability and make use of the best security mechanisms.
Participants in SET: In the general scenario of online transactions, SET includes similar
participants:
1. Cardholder – customer
2. Issuer – customer financial institution
3. Merchant
4. Acquirer – merchant's financial institution
5. Certificate authority – an authority that follows certain standards and issues certificates (such as X.509v3) to all the other participants.
SET functionalities:
 Provide Authentication
 Merchant Authentication – To prevent theft, SET allows customers to check
previous relationships between merchants and financial institutions. Standard
X.509V3 certificates are used for this verification.
 Customer / Cardholder Authentication – SET uses X.509v3 certificates to check whether the credit card is being used by its authorized user.
 Provide Message Confidentiality: Confidentiality refers to preventing unintended people
from reading the message being transferred. SET implements confidentiality by using
encryption techniques. Traditionally DES is used for encryption purposes.
 Provide Message Integrity: SET prevents message modification with the help of digital signatures. Messages are protected against unauthorized modification using RSA digital signatures with SHA-1, and in some cases HMAC with SHA-1 (see the sketch after this list).
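As a small illustration of the HMAC-based integrity check mentioned above, here is a minimal Python sketch. The key and messages are made-up placeholders, and SHA-1 is used only because it is the algorithm named in the SET description; a modern design would prefer SHA-256.

import hashlib
import hmac

# Placeholder values for illustration only.
shared_key = b"placeholder-shared-secret"
message = b"OrderID=1234;Amount=49.99;Currency=USD"

# Sender computes a MAC over the message with the shared key.
tag = hmac.new(shared_key, message, hashlib.sha1).hexdigest()

# Receiver recomputes the MAC and compares it in constant time.
expected = hmac.new(shared_key, message, hashlib.sha1).hexdigest()
print("Message intact:", hmac.compare_digest(tag, expected))          # True

# Any modification of the message changes the MAC, so the check fails.
tampered = b"OrderID=1234;Amount=4999.99;Currency=USD"
forged_tag = hmac.new(shared_key, tampered, hashlib.sha1).hexdigest()
print("Tampered message accepted:", hmac.compare_digest(tag, forged_tag))   # False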
Dual Signature: The dual signature is a concept introduced with SET which links two pieces of information meant for two different receivers:
Order Information (OI) for merchant
Payment Information (PI) for bank
Sending them separately might seem simpler and more secure, but sending them in a linked form resolves any possible future dispute, because each receiver can verify its own part without seeing the other. The dual signature is computed as DS = E(KRc, [H(H(PI) || H(OI))]), where KRc is the customer's private key and H is the SHA-1 hash function. Its generation is shown below:
[Figure: Generation of the dual signature]
Purchase Request Generation: The process of purchase request generation requires three
inputs:
 Payment Information (PI)
 Dual Signature
 Order Information Message Digest (OIMD)
The purchase request is generated as follows:
Purchase Request Validation on the Merchant Side: The merchant verifies the request by comparing the POMD it computes by hashing the concatenation of PIMD and OIMD with the POMD recovered by decrypting the dual signature, as follows:
[Figure: Purchase request validation on the merchant side]
Since the customer's private key was used to create the dual signature, the decryption step D uses KUc, the public key of the customer (cardholder).
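The following Python sketch ties the dual-signature generation and the merchant-side check together. It is only an illustration, not the real SET message format: the PI and OI payloads are made-up strings, the RSA key pair is generated on the fly, the third-party cryptography package is assumed to be available, and SHA-1 is used only because it is the hash named in the SET description.

import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Made-up payloads standing in for the real SET message formats.
PI = b"card=4111111111111111;amount=49.99"   # Payment Information (for the bank)
OI = b"order=1234;item=textbook;qty=1"       # Order Information (for the merchant)

# Message digests of the two information pieces (SET uses SHA-1).
pimd = hashlib.sha1(PI).digest()   # PIMD = H(PI)
oimd = hashlib.sha1(OI).digest()   # OIMD = H(OI)

# Cardholder side: sign the concatenation PIMD || OIMD with the private key KRc.
# sign() hashes its input with SHA-1 internally, so the result is
# DS = E(KRc, [H(H(PI) || H(OI))]), i.e. the dual signature.
customer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dual_signature = customer_key.sign(pimd + oimd, padding.PKCS1v15(), hashes.SHA1())

# Merchant side: knows OI, PIMD, the dual signature, and the customer's public
# key KUc; it recomputes OIMD from OI and verifies the dual signature.
kuc = customer_key.public_key()
kuc.verify(dual_signature,
           pimd + hashlib.sha1(OI).digest(),
           padding.PKCS1v15(),
           hashes.SHA1())   # raises InvalidSignature if PI, OI, or the signature was altered
print("Dual signature verified by the merchant")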
Payment Authorization and Payment Capture: Payment authorization, as the name suggests, is the authorization of the payment information by the merchant, which ensures that the merchant will receive the payment. Payment capture is the process by which the merchant actually receives the payment; it again involves sending request blocks to the payment gateway, and the payment gateway in turn issues the payment to the merchant.
Disadvantages of Secure Electronic Transaction: When SET was first introduced in 1996 by the SET consortium (Visa, Mastercard, Microsoft, VeriSign, and others), it was expected to be widely adopted within the next few years. Industry experts also predicted that it would quickly become the key enabler of global e-commerce. However, this did not happen, because of several serious practical weaknesses in the protocol.
The security properties of SET are better than those of SSL and the more recent TLS, especially in their ability to prevent e-commerce fraud. However, the biggest drawback of SET is its complexity. SET requires both customers and merchants to install special software (card readers and digital wallets), meaning that transaction participants had to do more work to deploy SET. This complexity also slowed down e-commerce transactions, whereas SSL and TLS do not have such issues.
The overhead associated with PKI and with the initialization and registration processes also slowed the widespread adoption of SET. Interoperability among SET products (for example, certificate translations and interpretations among trusted third parties with different certificate policies) was another significant problem, and SET was further hampered by poor usability and the weaknesses of PKI.