510.lecture Module
CIA concepts
Other Terminologies
Security Policy Implementation
Standards, Guidelines, and Procedures
Terms and Definitions
The Big Three: CIA concepts
These concepts represent the three fundamental principles of information security
All of the information security controls and safeguards and all of the threats,
vulnerabilities, and security processes are subject to the CIA yardstick.
Confidentiality, Integrity and Availability.
The Big Three: CIA concepts...Cont
Confidentiality. The concept of confidentiality attempts to prevent the intentional
or unintentional unauthorized disclosure of a message’s contents.
Loss of confidentiality can occur in many ways, such as through the intentional
release of private company information or through a misapplication of network
rights.
The Big Three: CIA concepts..Cont
Integrity. The concept of integrity ensures that:
Modifications are not made to data by unauthorized personnel or processes.
Unauthorized modifications are not made to data by authorized personnel or
processes.
The data is internally and externally consistent; that is, the internal information is consistent among all sub-entities and is consistent with the real-world, external situation.
The Big Three: CIA concepts..Cont
Availability. The concept of availability ensures the reliable and timely access to
data or computing resources by the appropriate personnel.
Availability guarantees that the systems are up and running when needed.
In addition, this concept guarantees that the security services that the security
practitioner needs are in working order.
Other Terminologies
Identification. The means by which users claim their identities to a system.
Most commonly used for access control, identification is necessary for
authentication and authorization.
Authentication. The testing or reconciliation of evidence of a user’s identity. It
establishes the user’s identity and ensures that the users are who they say they
are.
Other Terminologies...Cont
Accountability. A system’s capability to determine the actions and behaviours of
a single individual within a system, and to identify that particular individual. Audit
trails and logs support accountability.
Authorization. The rights and permissions granted to an individual (or process)
that enable access to a computer resource.
Once a user’s identity and authentication are established, authorization levels
determine the extent of system rights that an operator can hold.
Other Terminologies...Cont
Privacy. The level of confidentiality and privacy protection given to a user in a
system.
This is often an important component of security controls.
Privacy not only upholds the fundamental tenet of confidentiality of a
company's data, but also guarantees the level of privacy of the data that an
operator is using.
Security Policy Implementation
A policy is a course of action written down to direct the operations of an
organization.
For example, there are security policies on firewalls, which refer to the access
control and routing list information.
A good, well-written policy is an essential and fundamental element of sound
security practice.
A policy, for example, can literally be a lifesaver during a disaster, or it might be a
requirement of a governmental or regulatory function.
Security Policy Implementation..Cont
A policy can also provide protection from liability due to an employee’s actions or
can form a basis for the control of trade secrets.
Standards. Standards specify the use of specific technologies in a uniform way.
This standardization of operating procedures can be a benefit to an organization
by specifying the uniform methodologies to be used for the security controls.
Standards are usually compulsory and are implemented throughout an
organization for uniformity.
Standards, Guidelines, and Procedures...Cont
Guidelines. Guidelines are similar to standards—they refer to the methodologies
of securing systems, but they are recommended actions only and are not
compulsory.
Guidelines are more flexible than standards and take into consideration the
varying nature of the information systems.
Guidelines can be used to specify the way standards should be developed.
Standards, Guidelines, and Procedures...Cont
Procedures. Procedures lay out the detailed steps that must be followed to
perform a specific task.
Procedures are the detailed actions that personnel must follow.
They are considered the lowest level in the policy chain.
Their purpose is to provide the detailed steps for implementing the policies,
standards, and guidelines previously created.
Terms and Definitions
Asset. An asset is a resource, process, product, computing infrastructure, and so
forth that an organization has determined must be protected.
The loss of the asset could affect confidentiality, integrity and availability.
It could also affect the full ability of an organization to continue in business.
The value of an asset is composed of all of the elements that are related to that
asset: its creation, development, support, replacement, public credibility,
considered costs, and ownership values.
Terms and Definitions
Threat. Simply the presence of any potential event that could cause an undesirable
impact on the organization.
A threat could be man-made or natural and could have a small or large effect on a
company's security or viability.
Vulnerability. The presence of a weakness in the system that might be taken advantage of.
This could be an implementation or logical error in the system.
Think of a vulnerability as the opening that lets a threat get through a safeguard into the system.
Safeguard. The control or countermeasure employed to reduce the risk associated with a
specific threat or group of threats.
Chapter Two: Access control systems
Controlling access to information systems and associated networks is
necessary for the preservation of their confidentiality, integrity, and
availability.
Confidentiality assures that the information is not disclosed to unauthorized
persons or processes.
We address integrity through the following three goals:
1. Prevention of the modification of information by unauthorized users.
2. Prevention of the unauthorized or unintentional modification of information by authorized users.
3. Preservation of the internal and external consistency of the information.
Access control systems...Cont
A) Internal consistency ensures that internal data is consistent.
For example, assume that an internal database holds the number of units of a
particular item in each department of an organization. The sum of the number of
units in each department should equal the total number of units that the database
has recorded internally for the whole organization.
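The check described in this example is easy to automate. Below is a minimal Python sketch (the department names and figures are hypothetical) that sums the per-department counts and compares them with the organization-wide total recorded in the same database.

```python
# A minimal sketch (hypothetical department names and figures) of the
# internal-consistency check described above: the per-department unit counts
# must add up to the organization-wide total recorded in the same database.

department_units = {"sales": 120, "warehouse": 340, "returns": 15}
recorded_total = 475  # total the database holds for the whole organization

def internally_consistent(per_department, total):
    """Return True if the departmental counts sum to the recorded total."""
    return sum(per_department.values()) == total

print(internally_consistent(department_units, recorded_total))  # True
```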
Access control systems...Cont
B) External consistency ensures that the data stored in the database is consistent
with the real world. Using the example previously discussed in (a), external
consistency means that the number of items recorded in the database for each
department is equal to the number of items that physically exist in that department
Access control systems...Cont
Availability assures that a system’s authorized users have timely and
uninterrupted access to the information in the system.
Reliability, utility, and other related objectives flow from the organizational security
policy.
This policy is a high-level statement of management intent regarding the control
of access to information and the personnel who are authorized to receive that
information.
Access control systems...Cont
Three things that you must consider for the planning and implementation of
access control mechanisms are the threats to the system, the system’s
vulnerability to these threats, and the risk that the threat might materialize.
We further define these concepts as follows:
Threat. An event or activity that has the potential to breach or cause harm to the
information systems or networks.
Access control systems...Cont
Controls are implemented to mitigate risk and reduce the potential for loss.
Controls can be preventive, detective, or corrective.
Preventive controls are put in place to inhibit harmful occurrences.
Detective controls are established to discover harmful occurrences.
Corrective controls are used to restore systems that are victims of harmful
attacks. To implement these measures, controls can be administrative, logical or
technical, and physical.
Types of access control systems
Administrative controls include policies and procedures, security awareness
training, background checks, work habit checks, a review of vacation history,
and increased supervision.
Logical or technical controls involve the restriction of access to systems and
the protection of information. Examples of these types of controls are
encryption, smart cards, access control lists, and transmission protocols.
Physical controls incorporate guards and building security in general, such as
the locking of doors, the securing of server rooms or laptops, the protection of
cables, the separation of duties, and the backing up of files.
Access control systems...Cont
Controls provide accountability for individuals who are accessing sensitive
information. This accountability is accomplished through access control
mechanisms that require identification and authentication and through the audit
function. These controls must be in accordance with and accurately represent the
organization's security policy.
Models for Controlling Access
Mandatory Access Control. The authorization of a subject’s access to an object depends
upon levels, which indicate the subject’s clearance, and the classification or sensitivity of
the object.
For example, the military classifies documents as unclassified, confidential, secret, and top
secret.
Similarly, an individual can receive a clearance of confidential, secret, or top secret and can
have access to documents classified at or below his or her specified clearance level.
Thus, an individual with a clearance of “secret” can have access to secret and confidential
documents with a restriction.
This restriction is that the individual must have a need to know relative to the classified
documents involved.
Models for Controlling Access cont..
Therefore, the documents must be necessary for that individual to complete an
assigned task.
Even if the individual is cleared for a classification level of information, unless
there is a need to know, the individual should not access the information.
Rule-based access control is a type of mandatory access control because rules
determine this access, rather than the identity of the subjects and objects alone.
Models Cont...
Discretionary Access Control. The subject has authority, within certain limitations, to
specify what objects are accessible.
For example, access control lists can be used. An access control list is one denoting
which users have what privileges to a particular resource.
For example, a tabular listing would show the subjects or users who have access to the
file and what privileges they have with respect to that file.
An access control triple consists of the user, program, and file with the corresponding
access privileges noted for each user.
This type of access control is used in local, dynamic situations where the subjects must
have the discretion to specify what resources certain users are permitted to access.
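As an illustration of the tabular listing described above, here is a minimal Python sketch of an access control list check; the users, file name, and privileges are hypothetical examples, not a real API.

```python
# A minimal sketch of a discretionary access control list: a table that
# denotes which users hold which privileges on a particular resource.
# The users, file name, and privileges are hypothetical examples.

acl = {
    ("alice", "payroll.xlsx"): {"read", "write"},
    ("bob",   "payroll.xlsx"): {"read"},
}

def is_permitted(user, resource, privilege):
    """Return True if the ACL grants `privilege` on `resource` to `user`."""
    return privilege in acl.get((user, resource), set())

print(is_permitted("alice", "payroll.xlsx", "write"))  # True
print(is_permitted("bob", "payroll.xlsx", "write"))    # False
```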
Models for Controlling Cont..
When a user within certain limitations has the right to alter the access control to
certain objects, this is termed as user-directed discretionary access control.
An identity-based access control is a type of discretionary access control based
on an individual’s identity.
In some instances, a hybrid approach is used, which combines the features of
user-based and identity-based discretionary access control.
Models for Controlling Access... Cont
Non-Discretionary Access Control (role-based). A central authority determines
what subjects can have access to certain objects based on the organizational
security policy.
The access controls might be based on the individual’s role in the organization or
the subject’s responsibilities and duties.
In an organization where there are frequent personnel changes, non-discretionary
access control is useful because the access controls are based on the individual’s
role or title within the organization.
These access controls do not need to be changed whenever a new person takes
over that role.
Identification and authentication
Identification and authentication are the keystones of most access control systems.
Identification is the act of a user professing an identity to a system, usually in the form of a
logon ID to the system.
Identification establishes user accountability for the actions on the system.
Authentication is verification that the user’s claimed identity is valid, and is usually
implemented through a user password at logon time.
Authentication is based on the following three factor types:
Type 1. Something you know, such as a personal identification number (PIN) or password
Type 2. Something you have, such as an ATM card or smart card
Type 3. Something you are (physically), such as a fingerprint or retina scan
Identification and authentication...Cont
Sometimes a fourth factor, something you do, is added to this list.
Something you do might be typing your name or other phrases on a keyboard.
Conversely, something you do can be considered something you are.
Two-Factor Authentication refers to the act of requiring two of the three factors to
be used in the authentication process.
For example, withdrawing funds from an ATM requires two-factor authentication in
the form of the ATM card (something you have) and a PIN (something you know).
Passwords
Passwords can be compromised and must be protected.
In the ideal case, a password should only be used once.
This “one-time password” provides maximum security because a new
password is required for each new logon.
A password that is the same for each logon is called a static password.
A password that changes with each logon is termed a dynamic password.
The changing of passwords can also fall between these two extremes.
Passwords can be required to change monthly, quarterly, or at other intervals, depending on the
criticality of the information needing protection and the password’s frequency of use.
Passwords...Cont
Obviously, the more times a password is used, the more chance there is of it
being compromised.
A passphrase is a sequence of characters that is usually longer than the allotted
number for a password.
The passphrase is converted into a virtual password by the system.
In all these schemes, a front-end authentication device and a back-end
authentication server, which services multiple workstations or the host, can
perform the authentication.
Biometrics
An alternative to using passwords for authentication in logical or technical access
control is biometrics.
Biometrics are based on the Type 3 authentication mechanism: something you are.
Biometrics are defined as an automated means of identifying or authenticating the
identity of a living person based on physiological or behavioural characteristics.
In biometrics, identification is a “one-to-many” search of an individual’s
characteristics from a database of stored images.
Authentication in biometrics is a “one-to-one” search to verify a claim to an identity
made by a person.
Performance measures
False Rejection Rate (FRR) or Type I Error is the percentage of valid subjects that are
falsely rejected.
False Acceptance Rate (FAR) or Type II Error is the percentage of invalid subjects that are
falsely accepted.
Crossover Error Rate (CER) or Type III Error is the rate at which the False Rejection
Rate equals the False Acceptance Rate.
Almost all types of detection permit a system's sensitivity to be increased or
decreased during an inspection process.
If the system's sensitivity is increased, such as in an airport metal detector, the
system becomes increasingly selective and has a higher False Rejection Rate.
Measures...Cont
Conversely, if the sensitivity is decreased, the FAR will increase.
Thus, to have a valid measure of the system performance, the CER is used.
In addition to the accuracy of the biometric systems, there are other factors that
must also be considered.
These factors include the enrolment time, the throughput rate, and acceptability.
Enrolment time is the time that it takes to initially “register” with a system by
providing samples of the biometric characteristic to be evaluated.
An acceptable enrolment time is around two minutes.
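To make the FRR/FAR trade-off above concrete, the sketch below uses made-up matching scores and a movable decision threshold; it illustrates the definitions only and is not a real biometric engine.

```python
# A minimal sketch of the FRR/FAR definitions above, using made-up matching
# scores. Higher scores mean "more likely a genuine match"; samples scoring
# below the decision threshold are rejected.

genuine_scores  = [0.91, 0.85, 0.40, 0.77, 0.95]   # valid subjects (hypothetical)
impostor_scores = [0.30, 0.55, 0.20, 0.65, 0.10]   # invalid subjects (hypothetical)

def frr(scores, threshold):
    """False Rejection Rate (Type I): fraction of valid subjects rejected."""
    return sum(s < threshold for s in scores) / len(scores)

def far(scores, threshold):
    """False Acceptance Rate (Type II): fraction of invalid subjects accepted."""
    return sum(s >= threshold for s in scores) / len(scores)

for t in (0.3, 0.5, 0.7):
    print(f"threshold={t}: FRR={frr(genuine_scores, t):.2f}, "
          f"FAR={far(impostor_scores, t):.2f}")
# Raising the threshold (a more selective system) raises FRR and lowers FAR;
# the Crossover Error Rate is the point at which the two curves meet.
```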
Biometrics...Cont
For example, in fingerprint systems the actual fingerprint is stored and requires
approximately 250KB per finger for a high-quality image.
This level of information is required for one-to-many searches in forensics
applications on very large databases.
In finger-scan technology, a full fingerprint is not stored; rather, the features
extracted from this fingerprint are stored by using a small template that requires
approximately 500 to 1,000 bytes of storage.
Finger-scan technology is used for one-to-one verification by using smaller
databases.
Chapter Four: Network security
An Intrusion Detection System (IDS) is a system that monitors network traffic or
monitors host audit logs in order to determine whether any violations of an
organization’s security policy have taken place.
An IDS can detect intrusions that have circumvented or passed through a firewall or
that are occurring within the local area network behind the firewall. A truly effective IDS
will detect common attacks as they occur, which includes distributed attacks.
This type of IDS is called a network-based IDS because it monitors network traffic in
real time. Conversely, a host-based IDS resides on centralized hosts.
Network intrusions
A network-based IDS usually provides reliable, real-time information without consuming
network or host resources.
A network-based IDS is passive when acquiring data. Because a network-based IDS
reviews packets and headers, it can also detect denial of service attacks.
Furthermore, because this IDS is monitoring an attack in real time, it can also respond
to an attack in progress to limit damage.
A problem with a network-based IDS is that it will not detect attacks against a host
made by an intruder who is logged in at the host’s terminal.
If a network IDS along with some additional support mechanism determines that an
attack is being mounted against a host, it is usually not capable of determining the type
or effectiveness of the attack being launched.
Host intrusions
A host-based IDS can review the system and event logs in order to detect an
attack on the host and to determine whether the attack was successful.
It is also easier to respond to an attack from the host.
Detection capabilities of host-based ID systems are limited by the incompleteness
of most host audit log capabilities.
Statistical Anomaly IDS
With this method, an IDS acquires data and defines a “normal” usage profile for
the network or host that is being monitored.
This characterization is accomplished by taking statistical samples of the system
over a period of normal use.
Typical characteristics used to establish a normal profile include memory
usage, CPU utilization, and network packet types.
With this approach, new attacks can be detected because they produce abnormal
system statistics.
Some disadvantages of a statistical anomaly-based ID are that it will not detect an
attack that does not significantly change the system operating characteristics, or it
might falsely detect a non-attack event that had caused a momentary anomaly in
the system.
Signature based IDS
In a signature-based IDS, signatures or attributes, which characterize an attack,
are stored in the database for reference.
Then, when data about events are acquired from host audit logs or from network
packet monitoring, this data is compared with the attack signature database.
If there is a match, a response is initiated. A weakness of this approach is the
failure to characterize slow attacks that extend over a long time period.
To identify these types of attacks, large amounts of information must be held for
extended time periods. Another issue with signature-based IDSs is that only attack
signatures that are stored in their database are detected.
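A minimal sketch of the matching step a signature-based IDS performs: each acquired event record is compared against a stored signature database and a response is initiated on a match. The two signatures and the log lines are illustrative only.

```python
# A minimal sketch of signature matching: acquired event records are compared
# against a stored database of attack signatures, and an alert (the response)
# is raised on a match. Both signatures and log lines are illustrative only.

import re

signature_db = {
    "SQL injection attempt": re.compile(r"union\s+select", re.IGNORECASE),
    "Path traversal":        re.compile(r"\.\./\.\./"),
}

def match_signatures(event):
    """Return the names of all stored signatures that match this event."""
    return [name for name, pattern in signature_db.items() if pattern.search(event)]

log = [
    "GET /index.html HTTP/1.1",
    "GET /item?id=1 UNION SELECT password FROM users",
    "GET /../../etc/passwd",
]
for event in log:
    hits = match_signatures(event)
    if hits:
        print(f"ALERT {hits}: {event}")  # only known signatures are ever detected
```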
Network vs Host-based IDS
(a) Network-based ID systems commonly reside on a discrete network segment
and monitor the traffic on that segment. They usually consist of a network
appliance with a Network Interface Card operating in promiscuous mode,
intercepting and analysing the network packets in real time.
(b) Host-based ID systems use small intelligent program agents, which reside on a
host computer, monitor the operating system continually, write to log files, and
trigger alarms.
■■ They detect inappropriate activity only on the host computer; they do not monitor the
entire network segment.
Knowledge vs Behaviour IDS
The two current conceptual approaches to Intrusion Detection methodology are
knowledge and behavioural systems, sometimes referred to as signature and
statistical anomaly-based ID, respectively.
Knowledge-based ID Systems use a database of previous attacks and known
system vulnerabilities to look for current attempts to exploit their vulnerabilities,
and trigger an alarm if an attempt is found. These systems are more common than
behaviour-based.
The advantages are low false-alarm rates (false positives), and alarms that are
standardized and clearly understood by security personnel.
The disadvantages are that they are resource-intensive (the knowledge database
continually needs maintenance and updates) and that new, unique, or original
attacks often go unnoticed.
Network attacks
(a) Denial of Service attacks create service outages by saturating networked
resources. This saturation can be aimed at the network devices, servers, or
infrastructure bandwidth.
For example, the Distributed Denial of Service (DDoS) attack that occurred in
February 2000 is not specifically considered a hack, because the attack's primary
goal was not to gather information but rather to halt service by overloading the
system.
This attack, however, can be used as a diversion to enable an intentional hack to
gain information from a different part of the system by diverting the company’s
Information resources elsewhere.
Network attacks cont..
(b) Network Intrusions refers to the use of unauthorized access to break into a network
primarily from an external source.
Unlike a login abuse attack, the intruders are not considered to be known to the
company. Also known as a penetration attack, it exploits known security vulnerabilities
in the security perimeter.
(c) Spoofing refers to an attacker deliberately inducing a user or device into taking an
incorrect action by giving it incorrect information.
(d) Piggy-backing refers to an attacker gaining unauthorized access to a system by
using a legitimate user’s connection. A user leaves a session open or incorrectly logs
off, enabling an attacker to resume the session.
Network security
(a) Packet Filtering Firewalls. A packet-filtering firewall, also called a
screening router, examines the source and destination address of each incoming data packet.
The firewall either blocks the packet or passes it to its intended destination network,
which is usually the local network segment where the firewall resides.
(b) Access Control Lists.
These are database files that reside on the firewall, are maintained by the firewall
administrator, and tell the firewall specifically which packets can and cannot be
forwarded to certain addresses.
The firewall enables access only for authorized application ports or service numbers.
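The following sketch illustrates how such a packet-filtering rule base (the firewall's access control list) might be evaluated; the addresses, ports, and rules are hypothetical, and the first matching rule wins.

```python
# A minimal sketch of a packet-filtering rule base (a screening router's ACL).
# Addresses, ports, and rules are hypothetical; the first matching rule wins.

import ipaddress

RULES = [
    # (source network, destination network, destination port, action)
    ("0.0.0.0/0",  "192.168.1.10/32", 80,   "allow"),  # public web server
    ("10.0.0.0/8", "192.168.1.0/24",  None, "allow"),  # internal hosts, any port
    ("0.0.0.0/0",  "0.0.0.0/0",       None, "deny"),   # default deny
]

def decide(src, dst, dport):
    """Return the action of the first rule matching this packet's header."""
    for src_net, dst_net, port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

print(decide("203.0.113.5", "192.168.1.10", 80))  # allow
print(decide("203.0.113.5", "192.168.1.20", 22))  # deny
```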
Network security cont..
(c) Application level firewall is commonly a host computer that is running proxy server
software, which makes it a Proxy Server.
This firewall works by transferring a copy of each accepted data packet from one
network to another, thereby masking the data’s origin.
This can control which services a workstation uses (FTP and so on), and it also aids in
protecting the network from outsiders who may be trying to get information about the
network’s design.
Network Security cont..
(d) A Virtual Private Network is created by building a secure communications link
between two nodes by using a secret encapsulation method.
This link is a secure encrypted tunnel or, because encryption may or may not be
used, more generally an encapsulated tunnel. This tunnel can be created by the following:
■■ Installing software or hardware agents on the client or a network gateway
■■ Implementing various user or node authentication systems
■■ Implementing key and certificate exchange systems
VPN protocols
(a) Layer 2 Tunnelling Protocol (L2TP) is a combination of PPTP and the earlier Layer 2
Forwarding Protocol that works at the Data Link Layer.
This standard was designed for single point-to-point client-to-server connections.
Note that multiple protocols can be encapsulated within the L2TP tunnel, but L2TP
itself does not provide encryption.
(b) IPSec operates at the Network Layer and it enables multiple and simultaneous
tunnels, unlike the single connection of the previous standards.
IPSec can encrypt and authenticate IP data, and it is built into IPv6.
IPSec focuses more on network-to-network connectivity.
VPN protocols
(c) Point-to-Point Tunnelling Protocol works at the Data Link Layer of the OSI model.
Designed for individual client to server connections, it enables only a single point-to-
point connection per session.
This standard is very common with asynchronous connections.
Chapter 5: Cryptography
The purpose of cryptography is to protect transmitted information from being read and
understood by anyone except the intended recipient.
In the ideal sense, unauthorized individuals can never read an enciphered message.
In practice, reading an enciphered communication is a function of time: the effort
and corresponding time required for an unauthorized individual to decipher the
encrypted message.
By the time the message is decrypted, the information within it may be of
minimal value.
Definition of terminologies
A cipher is a cryptographic transformation that operates on characters or bits.
A cryptographic algorithm is a step-by-step procedure used to encipher plaintext and
decipher ciphertext.
Cryptography. The art and science of hiding the meaning of a communication from
unintended recipients.
A block cipher is an encryption method that operates on a predefined block of text. For example,
the cipher AES encrypts 128-bit blocks with a key of predetermined length.
A stream cipher is an encryption method that encrypts the text one bit at a time. Both
stream and block ciphers fall under symmetric key encryption.
Cryptanalysis is the act of obtaining the plaintext or key from the ciphertext, whether to
extract valuable information or to pass on altered or fake messages that deceive the
intended recipient; in short, breaking the cipher.
Approaches to Encryption
The Caesar Cipher Substitution is a simple substitution cipher that involves shifting the
alphabet three positions to the right.
Here, the message's characters and the repeated characters of the key are added
together modulo 26. The letters A to Z of the alphabet are given the values 0 to 25,
respectively, and D is the number of letters in the repeating key K.
In the following example the key length D = 3 and K = BAD.
The message is: ATTACK AT DAWN
Assigning numerical values to the message yields:
A T T A C K A T D A W N
0 19 19 0 2 10 0 19 3 0 22 13
The numerical values of K = BAD are:
B A D
1 0 3
Adding the repeated key to the message modulo 26 gives 1 19 22 1 2 13 1 19 6 1 22 16,
that is, the cipher text BTWBC NBTGB WQ.
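The same computation can be written as a short Python sketch: letters A to Z map to 0 to 25, the key repeats over the message, and each pair is added modulo 26.

```python
# A minimal sketch of the keyed substitution above: A..Z map to 0..25, the key
# repeats over the message, and each pair is added modulo 26.

def encipher(message, key):
    message = message.replace(" ", "")
    out = []
    for i, ch in enumerate(message):
        m = ord(ch) - ord("A")                  # message letter value
        k = ord(key[i % len(key)]) - ord("A")   # repeating key letter value
        out.append(chr((m + k) % 26 + ord("A")))
    return "".join(out)

print(encipher("ATTACK AT DAWN", "BAD"))  # BTWBCNBTGBWQ
```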
Types of Encryption
In symmetric key (private key) cryptography, both the receiver and sender share a common secret
key. For instance, if the sender encrypts with the key "Password" under a given algorithm,
the receiver will need the same key to decipher the message.
In asymmetric (public key) cryptography, the sender and receiver each hold a public key
and a private key.
The public and private keys are related mathematically, and in an ideal case they have
the characteristic that an individual who has the public key cannot derive the private
key.
In this type, an independent authentication server can verify that a sender's public
key corresponds to their decrypting private key.
Symmetric key encryption
In symmetric key encryption the same key is used by the sender and the receiver;
the challenge lies in getting the key to both of them securely.
This encryption is applied in payment applications, in validation to ensure the sender is
who they claim to be, and in hashing. The reason for these applications is that
symmetric encryption is faster than asymmetric encryption.
The scheme relies on a one-way function, which means the function cannot easily be
reversed.
This matters because it prevents cryptanalysts who might be eavesdropping from
recovering the key used by the sender.
Symmetric -Stream algorithms
There are essentially two kinds of symmetric encryption algorithms: stream and
block algorithms.
(a) Stream algorithms encrypt data as a stream of bits rather than holding whole
blocks in the system's memory. An example of this kind of algorithm is RC4.
For instance, to encrypt the clear text "Yes I will come", we add a keystream of
bits to it in order to come up with a cipher text.
The text is taken in its binary representation and combined with the key generated by
the sender, which the receiver will rely on too, for example {01010101} + {11110010}.
To first confuse a third party, we begin with the Caesar cipher, shifting characters
three positions using modulo 26 with A to Z represented as 0 to 25.
In this scheme the exclusive-or gate (XOR) is applied to the text and the keystream to
produce the cipher text, which the receiver, holding the same keystream, can decrypt.
Symmetric -Stream algorithms
The exclusive-or gate works modulo 2: two equal inputs give a 0, that is
0 + 0 = 0, 1 + 1 = 0
Two different inputs give a 1, that is to say
0 + 1 = 1, 1 + 0 = 1
Worked example: a sender wants to send the encrypted text "Yes" with a stream cipher.
Plain text = "Yes". We first apply the Caesar shift of three in modulo 26, which turns
"Yes" into "BHV", represented as [1 7 21] in its numerical (A = 0) equivalent.
In binary (ASCII), "BHV" is {01000010 01001000 01010110}.
We then XOR this with the keystream (labelled "con" in this example)
{00011101 11101100 10010011}:
{01000010 01001000 01010110} XOR {00011101 11101100 10010011} =
{01011111 10100100 11000101}, which is the cipher text.
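The worked example can be reproduced with the short sketch below: the text is Caesar-shifted by three positions and each byte is then XORed with the keystream bytes used above (the keystream values are illustrative only).

```python
# A minimal sketch reproducing the worked example: shift "Yes" three positions
# (Caesar), then XOR each byte with the keystream bytes used above. The
# keystream values are illustrative only.

def caesar_shift(text, shift=3):
    """Shift alphabetic characters by `shift` positions, wrapping around."""
    out = []
    for ch in text:
        base = ord("A") if ch.isupper() else ord("a")
        out.append(chr((ord(ch) - base + shift) % 26 + base) if ch.isalpha() else ch)
    return "".join(out)

def xor_bytes(data, keystream):
    """XOR each data byte with the corresponding keystream byte."""
    return bytes(d ^ k for d, k in zip(data, keystream))

shifted = caesar_shift("Yes").upper()                   # "BHV"
keystream = bytes([0b00011101, 0b11101100, 0b10010011])
cipher = xor_bytes(shifted.encode("ascii"), keystream)
print(shifted, [format(b, "08b") for b in cipher])
# BHV ['01011111', '10100100', '11000101']
# XOR is its own inverse: XORing the cipher text with the same keystream
# recovers "BHV", which is then shifted back by three to undo the Caesar step.
```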
Symmetric -Stream algorithm
Symmetric key cryptography is therefore still applicable in information security, as
the algorithm is simple to compute. Additionally, it is easy to tell whether the
algorithm is broken or not, as opposed to asymmetric cryptography.
The difficulty of deriving the key in this type of cryptography lies in the one-way
function: recovering it means guessing among the many possible inputs that could
have produced a particular outcome.
For instance, what was the input that produced a 1 after applying an XOR function,
and what was the initial keystream fed to the function? This one-way property is
where the security of symmetric key cryptography hinges, as it increases randomness.
In information security this is referred to as a hard problem, as reversing the function
requires running through many possible guesses.
Symmetric -Block algorithm
Block ciphers are part and parcel of symmetric key cryptography. Instead of a stream
of bits, they operate on blocks: 64 bits in length for DES, or 128 bits for AES.
DES was developed by IBM and proposed in 1974, and it was widely deployed, for
example in banking and telecommunication systems. By 1997 its weaknesses had
prompted a search for a replacement, and AES was eventually adopted instead.
AES is now the most commonly used encryption algorithm. It was selected in 2000 by
the US National Institute of Standards and Technology (NIST) for the strength of its security.
The algorithm has a fixed block size of 128 bits and key sizes of 128, 192, or 256 bits;
the larger the key, the harder the AES system is to crack.
The longer the key, the more secure the encryption and the more rounds are performed:
128-bit keys use 10 rounds, 192-bit keys use 12, and 256-bit keys use 14.
Each round applies four steps to increase the randomness of the encryption.
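For a concrete picture of AES as a 128-bit block cipher, here is a minimal sketch that assumes the third-party Python package cryptography is installed; the key and IV are generated on the spot purely for illustration, and CBC is one possible choice of mode.

```python
# A minimal sketch (assumes a recent version of the third-party "cryptography"
# package) of AES as described above: a 128-bit block cipher with a 128-,
# 192-, or 256-bit key. Key, IV, and mode (CBC) are illustrative choices.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)              # 128-bit key -> 10 rounds internally
iv  = os.urandom(16)              # one 128-bit initialization block
plaintext = b"Attack at dawn!!"   # exactly 16 bytes = one AES block

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())  # b'Attack at dawn!!'
```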
Symmetric -Block rounding
(1) The first step is confusion (SubBytes), done by byte substitution based on the key.
(2) The second step is permutation or transposition (ShiftRows): performing row shifts so that
one byte's encryption has an effect on the other bytes. It reshuffles bytes; for a set
X = {a,b,c}, reshuffling would give {b,c,a} in a row.
(3) The third step is diffusion (MixColumns), which involves a column mix: change one character
of the plain text and several characters of the cipher text change drastically, and vice versa.
(4) The fourth step is key addition (AddRoundKey). A subkey is generated from the main key, and
the full round is repeated 9, 11, or 13 more times for 128-, 192- and 256-bit keys
respectively.
The first three principles of the block cipher are the ideas upon which hashing hinges,
producing completely different hash characters even with a minimal difference in
the plain text. For instance, hashing "Hello world!" will produce a different hash from
"Hello world."
Hash Function
A hash is a one-way function that takes an input of unlimited length and produces a
fixed-size output of hash characters that cannot be easily guessed.
This definition entails that it is easy to compute the function forwards but
mathematically infeasible to compute it backwards.
The resistance to backward calculation comes from rounds, as in block cipher
algorithms, which increase the randomness of the outcome.
Hash functions are helpful because they preserve data integrity: any change made to
the hashed data results in a completely different hash value. In computer forensics,
hash functions are used to preserve digital evidence for investigators.
In information security, databases store sensitive information such as passwords as
hash values, and it is against these values that access is granted or denied depending on
the input of the user.
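The properties just described can be observed directly with Python's hashlib: the digest has a fixed length regardless of input size, and a minimally different input produces a completely different digest.

```python
# A minimal sketch of the hash properties above, using Python's hashlib:
# fixed-length output regardless of input size, and a completely different
# digest for a minimally different input (the avalanche effect).

import hashlib

print(hashlib.sha256(b"Hello world!").hexdigest())
print(hashlib.sha256(b"Hello world.").hexdigest())        # entirely different digest
print(len(hashlib.sha256(b"a" * 1_000_000).hexdigest()))  # still 64 hex chars (256 bits)
```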
Hash Function Examples
The best-known examples of commonly used hash functions have been those used for
securing input text, such as MD5, developed in 1992.
This function has a fixed 128-bit output (32 hexadecimal characters) with unlimited input,
and is primarily used for carrying out checksums on data. The outputs of particular inputs
became too predictable (collisions were found), and MD5 is thus considered broken.
The Secure Hash Algorithm (SHA-1) was developed in the early 1990s with a fixed output
length of 160 bits for the purpose of data verification, but it also suffered collisions and
is thus broken.
SHA-2 was developed in 2001 as an improvement, with fixed output length variants of
256 and 512 bits, the higher number of bits being considered more secure. It is still a
secure and useful function today.
RIPEMD is another such function; its original version was found to be weak, and
RIPEMD-160 was developed in 1996. That algorithm is still considered secure and is
used in the Bitcoin cryptocurrency system.
Asymmetric encryption algorithms
Asymmetric key encryption is one where each party uses a pair of keys, so four keys in
total are involved instead of one. The sender has a public and a private key, and the
receiver likewise has his or her own private and public keys.
This encryption does not involve sending a secret key to the receiver, owing to an
independent authentication server involved in the exchange of keys. An example of
asymmetric key encryption is RSA.
This algorithm is so far considered among the strongest because it derives its security
from the computational difficulty of factoring large integers that are the product of two
large prime numbers.
Multiplying two primes is easy, but the difficulty lies in determining the original numbers
from the product.
The algorithm is, in short, a one-way algorithm, with key sizes typically between 1024 and 2048 bits.
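A toy sketch of the RSA idea with textbook-sized primes (never usable in practice) shows why factoring matters: the public modulus n is easy to build from p and q, but recovering p and q from n is the hard problem. The modular inverse below relies on Python 3.8+ pow().

```python
# A toy RSA sketch with textbook-sized primes (never use numbers this small in
# practice): building n from p and q is easy, but recovering p and q from n is
# the hard problem RSA's security rests on. Requires Python 3.8+ for pow(e, -1, phi).

p, q = 61, 53
n = p * q                   # 3233, the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent, 2753

message = 65
cipher = pow(message, e, n)        # encrypt with the public key (e, n)
print(cipher, pow(cipher, d, n))   # 2790 65 -> decrypting recovers the message
```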
History of asymmetric cryptography
This encryption system was created to curb the inefficiencies that exist in sharing
keys between the sender and the receiver.
Whitfield Diffie and Martin Hellman of Stanford University published the idea of
asymmetric encryption in 1976. Their algorithm raises numbers to secret powers
modulo a large prime to produce a shared key.
RSA is widely used in the SSL/TLS protocols, which provide secure communication over
computer networks. The RSA algorithm built on Diffie and Hellman's ideas.
The asymmetric approach was devised to find a way for the sender and the receiver to
communicate publicly without revealing the exchanged key in use.
Diffie-Hellman algorithm
The Diffie-Hellman exchange is an example of a perfect-forward-secrecy algorithm: a
third party observing the exchange should not be able to recover the keys, and hence
not the plain text.
Diffie-Hellman uses prime numbers, and in practice these are at least 256 bits in length.
In an event that Alice (A) wants to communicate with Bob (B), they first agree on the
modulus (and generator) to use between them.
Alice (A) then picks a prime number and Bob (B) picks his own; each raises the agreed
generator to that secret number under the agreed modulus.
In these examples the numbers are picked from a keyspace of 1 to 1024.
Diffie-Hellman worked example
First the two parties agree on a generator and modulus: suppose computations of the
form 3^x mod 5 (generator 3, modulus 5).
Alice (A) and Bob (B) each pick a secret number from the 1-to-1024 keyspace.
Suppose A = 5, B = 7.
Alice raises the generator to her secret value: 3^5 mod 5 = 243 mod 5 = 3, so Alice's
public value is 3.
Bob does the same: 3^7 mod 5 = 2187 mod 5 = 2, so Bob's public value is 2.
If Alice wants to send a message to Bob, she uses his public key together with her
private key to send an encrypted message, and Bob then uses his private key to decrypt
the message in his inbox. When a message is instead signed with the sender's private key
and verified with the sender's public key, this is referred to as a digital signature, and it
provides proof of a transaction (verification).
It is easy to derive the public key from the private key but mathematically infeasible to
move in the opposite direction, because the derivation uses a one-way function.
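The exchange above can be checked with a few lines of Python using the same toy numbers; real exchanges use primes that are hundreds of digits long.

```python
# A minimal sketch of the exchange worked through above, with the same toy
# numbers (generator 3, modulus 5, private values 5 and 7). Real exchanges use
# primes that are hundreds of digits long.

g, p = 3, 5                  # agreed generator and modulus
a, b = 5, 7                  # Alice's and Bob's private values

A = pow(g, a, p)             # Alice's public value: 3^5 mod 5 = 3
B = pow(g, b, p)             # Bob's public value:   3^7 mod 5 = 2

shared_alice = pow(B, a, p)  # Alice combines Bob's public value with her secret
shared_bob   = pow(A, b, p)  # Bob combines Alice's public value with his secret
print(A, B, shared_alice, shared_bob)  # 3 2 2 2 -> both sides derive the same key
```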
Thank You