Distributed System Notes

The document discusses replication in distributed systems. It defines replication as keeping multiple copies of data in different places to improve availability. There are two main types of replication - active and passive. Active replication involves sending requests to all replicas to process in parallel, while passive replication uses one primary replica and backup replicas that only process requests if the primary fails. Replication improves fault tolerance, load balancing, and availability but also increases resource usage and processing time.


UNIT 3

What is Replication in a Distributed System?


In a distributed system, data is stored across different computers in a network. Therefore, we need to make sure that data is readily available to users. Availability of data is an important requirement, often accomplished by data replication. Replication is the practice of keeping several copies of data in different places.

Why do we require replication?

First and foremost, replication makes our system more stable. It is good to have replicas of a node in a network for the following reasons:

 If a node stops working, the distributed network will still work because its replicas are available. Thus replication increases the fault tolerance of the system.
 It also helps in load sharing, where the load on a server is shared among its replicas.
 It enhances the availability of the data. If replicas are created and data is stored near the consumers, it is easier and faster to fetch the data.

Types of Replication

 Active Replication
 Passive Replication

Active Replication:  

 The request of the client goes to all the replicas.


 It must be ensured that every replica receives the client request in the same order; otherwise, the system becomes inconsistent.
 No further coordination is needed, because each copy processes the same requests in the same sequence.
 All replicas respond to the client's request, as in the sketch below.
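
Below is a minimal sketch of active replication in Python. It assumes a totally ordered broadcast primitive is available; the names (Replica, ordered_broadcast) are hypothetical, for illustration only.

    class Replica:
        # A deterministic state machine: because every replica applies the
        # same requests in the same order, all copies stay identical.
        def __init__(self):
            self.state = {}

        def apply(self, request):
            op, key, value = request
            if op == "write":
                self.state[key] = value
            return self.state.get(key)

    replicas = [Replica() for _ in range(3)]

    def ordered_broadcast(request):
        # Stand-in for an atomic (totally ordered) broadcast: every replica
        # sees every request in the same sequence.
        return [replica.apply(request) for replica in replicas]

    # The client's request goes to all replicas, and all of them respond.
    responses = ordered_broadcast(("write", "x", 42))
    assert all(response == 42 for response in responses)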

Advantages:

 It is really simple: the code running on every replica is the same throughout.
 It is transparent.
 Even if a node fails, it will be easily handled by replicas of that node.

Disadvantages:

 It increases resource consumption: the greater the number of replicas, the more memory is needed.
 It increases processing time: any change made on one replica must also be made on all the others.

Passive Replication: 

 The client request goes to the primary replica, also called the main replica.
 Other replicas act as backups for the primary replica.
 The primary replica informs all backup replicas about any modification made.
 The response is returned to the client by the primary replica.
 Periodically, the primary replica sends a heartbeat signal to the backup replicas to let them know that it is working fine.
 If the primary replica fails, a backup replica becomes the new primary (see the sketch below).
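
A matching sketch of passive (primary-backup) replication follows; the class and method names are again hypothetical, for illustration only.

    class PrimaryBackupStore:
        def __init__(self, n_backups=2):
            self.primary = {}
            self.backups = [{} for _ in range(n_backups)]

        def write(self, key, value):
            # Only the primary processes the request ...
            self.primary[key] = value
            # ... then informs every backup of the modification.
            for backup in self.backups:
                backup[key] = value
            return "ok"   # the primary alone responds to the client

        def fail_over(self):
            # On primary failure, promote a backup to be the new primary.
            self.primary = self.backups.pop(0)

    store = PrimaryBackupStore()
    store.write("x", 1)
    store.fail_over()                # the primary crashes; a backup takes over
    assert store.primary["x"] == 1   # no data is lost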

Advantages:

 Resource consumption is lower, as the backup servers only come into play when the primary server fails.
 Processing time is also lower, since the request is processed only at the primary rather than at every replica, unlike active replication.

Disadvantages:

 If some failure occurs, the response time is delayed.

 To reduce the access time of data, caching is used. The combined effect of replication and caching increases the complexity and overhead of consistency management.
 Different consistency models offer different degrees of consistency.
 Depending on the order of operations allowed and how much inconsistency can be tolerated, consistency models range from strong to weak.
 Data-centric model:
 In this model it is guaranteed that the results of read and write operations performed on data are propagated to the various stores located nearby immediately.


Fig. (a): Data-centric model

 Client centric model:


 These consistency models do not handle simultaneous updates.
 Instead, they maintain a consistent view for an individual client process that accesses different replicas from different locations.


Fig. (b): Client-centric model

 Data centric consistency models explanation

 Strict consistency:
 It is the strongest data-centric consistency model, as it requires that a write on a data item be immediately visible at all replicas.
 This model states that "Any read on a data item x returns a value corresponding to the result of the most recent write on x."
 In a distributed system it is very difficult to maintain a global time ordering while data is processed concurrently at different locations, and transferring data from one location to another causes delays.
 Thus the strict consistency model is impossible to achieve.
 E.g., P1 and P2 are two processes. P1 performs a write operation on its data item x and modifies its value to a.
 The update is propagated to all other replicas.
 Suppose P2 now reads x and finds its value to be NIL due to the propagation delay.
 Then the value read by P2 is not strictly consistent,
 because the strict consistency model says that results should be immediately propagated to all replicas, as shown in the figure below.



 Client centric model explanation
 Monotonic Reads:
 It states that "If a process P reads the value of a data item x, any successive read operation on x by that process at a later time will always return that same value or a more recent value."
 E.g., each time one connects to the e-mail server (which may be a different replica), the server guarantees that a read returns the same mailbox state as before or a more recent one.
 Here, in fig (a), the write set WS(x1) from location L1 is propagated before the second read operation; thus the read at L2, with write set WS(x1;x2), sees the earlier update.

Consistency Model

 A consistency model is a contract between a distributed data store and its processes: the processes agree to obey certain rules, and in return the store promises to work correctly.
 A consistency model basically refers to the degree of consistency that should be maintained for
the shared memory data.
 If a system supports the stronger consistency model, then the weaker consistency model is
automatically supported but the converse is not true.
 The two types of consistency models are data-centric and client-centric consistency models.

1.Data-Centric Consistency Models

A data store may be physically distributed across multiple machines. Each process that can
access data from the store is assumed to have a local or nearby copy available of the entire store.

i.Strict Consistency model

 Any read on a data item X returns a value corresponding to the result of the most recent write
on X
 This is the strongest form of memory coherence which has the most stringent consistency
requirement.
 Strict consistency is the ideal model but it is impossible to implement in a distributed system. It
is based on absolute global time or a global agreement on commitment of changes.

ii.Sequential Consistency

 Sequential consistency is an important data-centric consistency model that is slightly weaker than strict consistency.
 A data store is said to be sequentially consistent if the result of any execution is the same as if
the (read and write) operations by all processes on the data store were executed in some
sequential order and the operations of each individual process should appear in this sequence in
a specified order.
 Example: Assume three operations read(R1), write(W1), read(R2) are performed in some order on a memory address. Then (R1,W1,R2), (R1,R2,W1), (W1,R1,R2), (R2,W1,R1) are all acceptable orderings, provided all processes see the same ordering.

iii.Linearizability

 Linearizability is weaker than strict consistency, but stronger than sequential consistency.
 A data store is said to be linearizable when each operation is timestamped and the result of any execution is the same as if the (read and write) operations by all processes on the data store were executed in some sequential order.
 The operations of each individual process appear in the sequence order specified by its program.
 In addition, if ts(OP1(x)) < ts(OP2(y)), then operation OP1(x) should precede OP2(y) in this sequence.

iv.Causal Consistency

 It is a weaker model than sequential consistency.


 In causal consistency, all processes see causally related memory reference operations in the same correct order.
 Memory reference operations that are not causally related may be seen by different processes in different orders.
 A memory reference operation is said to be causally related to another if it may have been influenced by that operation.
 If a write (w2) is causally related to an earlier write (w1), the only acceptable order is (w1, w2).

v.FIFO Consistency

 It is weaker than causal consistency.


 This model ensures that all write operations performed by a single process are seen by all other processes in the order in which they were performed, as if they pass through a pipeline.
 This model is simple and easy to implement, with good performance, because only per-process ordering has to be preserved.
 It is implemented by sequencing the write operations performed at each node independently of the operations performed on other nodes.
 Example: If (w11) and (w12) are write operations performed by p1 in that order, and (w21),(w22) by p2, then a process p3 can see them as [(w11,w12),(w21,w22)] while p4 can view them as [(w21,w22),(w11,w12)].

vi.Weak consistency

 The basic idea behind the weak consistency model is enforcing consistency on a group of
memory reference operations rather than individual operations.
 A Distributed Shared Memory system that supports the weak consistency model uses a special
variable called a synchronization variable which is used to synchronize memory.
 When a process accesses a synchronization variable, the entire memory is synchronized by
making visible the changes made to the memory to all other processes.

vii.Release Consistency

 The release consistency model tells the system whether a process is entering or exiting a critical section, so that the system can perform the appropriate operation when a synchronization variable is accessed by a process.
 Two synchronization variables, acquire and release, are used instead of a single synchronization variable. Acquire is used when a process enters a critical section and release when it exits one (see the sketch below).
 Release consistency can also be viewed as a synchronization mechanism based on barriers instead of critical sections.
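
As a rough illustration of the acquire/release idea (a toy analogy using Python threads, not a real distributed shared memory implementation; all names are illustrative):

    import threading

    class ReleaseConsistentStore:
        def __init__(self):
            self._shared = {}              # the "official" shared copy
            self._lock = threading.Lock()  # the synchronization variable

        def acquire(self):
            # Entering the critical section: take a fresh local copy.
            self._lock.acquire()
            return dict(self._shared)

        def release(self, local):
            # Exiting: publish buffered local writes to everyone else.
            self._shared.update(local)
            self._lock.release()

    store = ReleaseConsistentStore()
    local = store.acquire()
    local["x"] = 1          # writes are buffered locally ...
    store.release(local)    # ... and made visible only at release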

viii.Entry Consistency

 In entry consistency every shared data item is associated with a synchronization variable.
 In order to access consistent data, each synchronization variable must be explicitly acquired.
 Release consistency affects all shared data but entry consistency affects only those shared data
associated with a synchronization variable.

2.Client-Centric Consistency Models

 Unlike data-centric models, which take a system-wide view of a data store, client-centric consistency models concentrate on consistency from the perspective of a single (possibly mobile) client.
 Client-centric consistency models are generally used for applications that lack simultaneous updates, where most operations involve reading data.

i.Eventual Consistency

 In systems that tolerate a high degree of inconsistency, if no updates take place for a long time, all replicas will gradually become consistent. This form of consistency is called eventual consistency.
 Eventual consistency only requires that updates are guaranteed to propagate to all replicas eventually.
 Eventually consistent data stores work fine as long as clients always access the same replica.
 Write conflicts are often relatively easy to solve when assuming that only a small group of processes can perform updates. Eventual consistency is therefore often cheap to implement.

ii.Monotonic Reads Consistency

 A data store is said to provide monotonic-read consistency if, when a process reads the value of a data item x, any successive read operation on x by that process will always return that same value or a more recent value.
 If a process has seen a value of x at time t, it will never see an older version of x at a later time.
 Example: A user can read incoming mail while moving. Each time the user connects to a
different e-mail server, that server fetches all the updates from the server that the user
previously visited. Monotonic Reads guarantees that the user sees all updates, no matter from
which server the automatic reading takes place.
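
A minimal sketch of how a client can enforce monotonic reads with version numbers (the replica-selection logic and names are hypothetical, for illustration):

    class Replica:
        def __init__(self, version, value):
            self.version = version
            self.value = value

    class Client:
        def __init__(self, replicas):
            self.replicas = replicas
            self.last_seen = -1   # highest version this client has read so far

        def read(self):
            # Only accept replicas at least as fresh as what we saw before.
            fresh = [r for r in self.replicas if r.version >= self.last_seen]
            replica = fresh[0]
            self.last_seen = replica.version
            return replica.value

    replicas = [Replica(2, "mail up to msg 2"), Replica(1, "mail up to msg 1")]
    client = Client(replicas)
    print(client.read())   # returns version 2; the stale replica is now off-limits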

iii.Monotonic Writes

 A data store is said to be monotonic-write consistent if a write operation by a process on a data item x is completed before any successive write operation on x by the same process.
 A write operation on a copy of data item x is performed only if that copy has been brought up to
date by means of any preceding write operations, which may have taken place on other copies
of x.
 Example: Monotonic-write consistency guarantees that if an update is performed on a copy at server S, all preceding updates will be performed on that copy first. The resulting copy will then indeed be the most recent version and will include all updates that led to previous versions.

iv.Read Your Writes

 A data store is said to provide read-your-writes consistency if the effect of a write operation by a process on data item x will always be seen by a successive read operation on x by the same process.
 A write operation is always completed before a successive read operation by the same process, no matter where that read operation takes place.
 Example: Updating a Web page and guaranteeing that the Web browser shows the newest
version instead of its cached copy.
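
A small sketch of read-your-writes in the same style: the client remembers the version of its own latest write and only reads from replicas that include it (all names hypothetical):

    class Session:
        def __init__(self, replicas):
            # Each replica is a dict: {"version": int, "data": dict}.
            self.replicas = replicas
            self.my_write_version = 0   # version of this client's latest write

        def write(self, key, value):
            primary = self.replicas[0]
            primary["version"] += 1
            primary["data"][key] = value
            self.my_write_version = primary["version"]

        def read(self, key):
            # Pick a replica that already includes our own latest write.
            for replica in self.replicas:
                if replica["version"] >= self.my_write_version:
                    return replica["data"].get(key)

    replicas = [{"version": 0, "data": {}}, {"version": 0, "data": {}}]
    session = Session(replicas)
    session.write("page", "new version")
    print(session.read("page"))   # sees "new version", never the stale copy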

v.Writes Follow Reads

 A data store is said to provide writes-follow-reads consistency if a write operation by a process on a data item x, following a previous read operation on x by the same process, is guaranteed to take place on the same or a more recent value of x than the one that was read.
 Any successive write operation by a process on a data item x will be performed on a copy of x
that is up to date with the value most recently read by that process.
 Example: Suppose a user first reads an article A then posts a response B. By requiring writes-
follow-reads consistency, B will be written to any copy only after A has been written.

Fault-tolerance Techniques in Computer Systems
Fault tolerance is the ability of a system to keep working properly in spite of the occurrence of failures. Even after performing many testing processes, the possibility of failure remains; practically, a system cannot be made entirely error-free. Hence, systems are designed in such a way that in case of an error or failure, the system still works properly and gives correct results.

Any system has two major components – Hardware and Software. Fault may occur in either of it.
So there are separate techniques for fault-tolerance in both hardware and software.
Hardware Fault-tolerance Techniques:
Making hardware fault-tolerant is simpler than making software fault-tolerant. Fault-tolerance techniques make the hardware work properly and give correct results even when some fault occurs in the hardware part of the system. There are basically two techniques used for hardware fault-tolerance:

1. BIST –
BIST stands for Built-In Self-Test. The system carries out a test of itself again and again after a certain period of time; that is the BIST technique for hardware fault-tolerance. When the system detects a fault, it switches out the faulty component and switches in its redundant spare. The system basically reconfigures itself when a fault occurs.
2. TMR –
TMR is Triple Modular Redundancy. Three redundant copies of a critical component are generated, and all three copies are run concurrently. The results of all redundant copies are voted on and the majority result is selected, as in the sketch below. TMR can tolerate the occurrence of a single fault at a time.
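
A minimal sketch of TMR majority voting, with plain Python functions standing in for the three redundant hardware modules:

    from collections import Counter

    def module_a(x): return x * 2
    def module_b(x): return x * 2
    def module_c(x): return x * 2 + 1   # this copy is faulty

    def tmr_vote(x):
        results = [module_a(x), module_b(x), module_c(x)]
        # The voter selects the majority answer, masking one faulty module.
        winner, _count = Counter(results).most_common(1)[0]
        return winner

    print(tmr_vote(10))   # 20: the single faulty module is outvoted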

Software Fault-tolerance Techniques:


Software fault-tolerance techniques are used to make the software reliable in the presence of faults and failures. There are three techniques used in software fault-tolerance. The first two are common and are basically adaptations of hardware fault-tolerance techniques.

1. N-version Programming –
In N-version programming, N versions of the software are developed by N individuals or groups of developers. N-version programming is just like TMR in hardware fault-tolerance. All the redundant versions are run concurrently, and since each version is processed independently, the results obtained may differ and are compared to select a consensus result. The idea of N-version programming is basically to catch all the errors during development itself.

2. Recovery Blocks –
The recovery blocks technique is also like N-version programming, but in recovery blocks the redundant copies are generated using different algorithms. In a recovery block, the redundant copies are not run concurrently; they are run one by one, each checked by an acceptance test, as in the sketch below. The recovery block technique can only be used where the task deadline is longer than the task computation time.
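
A minimal sketch of the recovery-block structure, trying alternates one by one until an acceptance test passes (the functions here are illustrative):

    def acceptance_test(result):
        # Accept any non-negative answer.
        return result is not None and result >= 0

    def primary_alternate(x):
        return x - 100            # buggy for small x: may go negative

    def secondary_alternate(x):
        return max(x - 100, 0)    # a simpler algorithm that always passes

    def recovery_block(x):
        for alternate in (primary_alternate, secondary_alternate):
            result = alternate(x)
            if acceptance_test(result):
                return result
        raise RuntimeError("all alternates failed the acceptance test")

    print(recovery_block(42))   # the primary fails the test; the secondary returns 0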

3. Check-pointing and Rollback Recovery –


This technique is different from the above two techniques of software fault-tolerance. In this technique, the system state is saved (a checkpoint is taken) each time some computation is performed, and on failure the system rolls back to the last saved state. This technique is basically useful when there is processor failure or data corruption.
Recovery in Distributed Systems
Pre-requisites: Distributed System

Recovery from an error is essential to fault tolerance; an error is the part of a system's state that could result in failure. The whole idea of error recovery is to replace an erroneous state with an error-free state. Error recovery can be broadly divided into two categories.

1.  Backward Recovery:

Moving the system from its current state back into a formerly accurate condition from an
incorrect one is the main challenge in backward recovery. It will be required to accomplish this
by periodically recording the system’s state and restoring it when something goes wrong. A
checkpoint is deemed to have been reached each time (part of) the system’s current state is
noted.

2.  Forward Recovery:

Instead of returning the system to a previous, checkpointed state in this instance when it has
entered an incorrect state, an effort is made to place the system in a correct new state from which
it can continue to operate. The fundamental issue with forward error recovery techniques is that
potential errors must be anticipated in advance. Only then is it feasible to change those mistakes
and transfer to a new state.

These two types of possible recoveries are done in fault tolerance in distributed system.

Stable Storage :

Stable storage, which can survive anything except major disasters like floods and earthquakes, is another option. A pair of ordinary disks can be used to implement stable storage: each block on drive 2 is an exact duplicate of the corresponding block on drive 1. Whenever a block is updated, the block on drive 1 is updated and verified first; only then is the identical block on drive 2 updated.
Suppose the system crashes after drive 1 is updated but before the update on drive 2. Upon recovery, the two disks can be compared block by block. Since drive 1 is always updated before drive 2, whenever two corresponding blocks differ it is safe to assume that drive 1 holds the correct one, so the newer block is copied from drive 1 to drive 2. Both drives will be identical once the recovery process is finished.

Another potential issue is a block's natural deterioration: a previously valid block may suddenly develop a checksum error for no apparent reason. When such an error is discovered, the faulty block can be reconstructed from the corresponding block on the other drive, as in the sketch below.
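
A minimal sketch of the stable-storage write and recovery rules, with two Python lists standing in for the disk pair (names illustrative):

    drive1 = [b"old"] * 4
    drive2 = [b"old"] * 4

    def stable_write(i, block):
        drive1[i] = block            # 1. update drive 1 first and verify it ...
        assert drive1[i] == block
        drive2[i] = block            # 2. ... only then update drive 2

    def recover():
        for i in range(len(drive1)):
            if drive1[i] != drive2[i]:
                # A crash hit between the two writes (or a block decayed):
                # drive 1 is taken as correct and copied over.
                drive2[i] = drive1[i]

    stable_write(0, b"new")
    drive2[1] = b"garbled"           # simulate an interrupted or decayed block
    recover()
    assert drive1 == drive2          # the pair is consistent again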

Checkpointing :

Backward error recovery calls for the system to routinely save its state onto stable storage in a
fault-tolerant distributed system. We need to take a distributed snapshot, often known as a
consistent global state, in particular. If a process P has recorded the receipt of a message in a
distributed snapshot, then there should also be a process Q that has recorded the sending of that
message. It has to originate somewhere, after all. 

Each process periodically saves its state to a locally accessible stable storage in backward error
recovery techniques. We must create a stable global state from these local states in order to
recover from a process or system failure. Recovery to the most current distributed snapshot, also
known as a recovery line, is recommended in particular. In other words, as depicted in Fig., a
recovery line represents the most recent stable cluster of checkpoints.
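
A tiny sketch of checkpointing and rollback for a single process; the "stable storage" here is just an in-memory variable, for illustration only:

    import copy

    state = {"balance": 100}
    stable_checkpoint = None

    def take_checkpoint():
        global stable_checkpoint
        stable_checkpoint = copy.deepcopy(state)   # save state to stable storage

    def rollback():
        global state
        state = copy.deepcopy(stable_checkpoint)   # restore the recovery line

    take_checkpoint()
    state["balance"] -= 999   # an erroneous update ...
    rollback()                # ... is undone by rolling back to the checkpoint
    assert state["balance"] == 100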

Coordinated Checkpointing : 

As the name suggests, coordinated checkpointing synchronises all processes to write their state
to local stable storage at the same time. Coordinated checkpointing’s key benefit is that the saved
state is automatically globally consistent, preventing cascading rollbacks that could cause a
domino effect.

Message Logging :

The core principle of message logging is that if the transmission of messages can be replayed, we can still reach a globally consistent state without having to restore that state from stable storage. Instead, any messages that have been sent since the last checkpoint are simply retransmitted and handled again.
 

As the system executes, messages are recorded on stable storage. A message is called logged if both its data and the index of the stable interval in which it was delivered are recorded on stable storage. In the figure above, logged and unlogged messages are denoted by different arrows. The idea is that if the transmission of messages is replayed, we can still reach a globally consistent state, so we can recover the log of messages and continue the execution.

Security in Distributed System


In a distributed system, one must consider many possible security risks. To mitigate these risks
there are a number of strategies that can be employed: 

 Encryption algorithms that protect data in transit and at rest.


 Firewalls that limit access to specific ports/cables.
 Intrusion detection systems that identify anomalous behavior among network services.
 Intrusion prevention systems (IPS) respond to attempted intrusions by initiating defensive
actions like blocking suspicious IP addresses or taking down compromised servers.

These measures alone may be insufficient to identify attacks at the network level without help from other sources. We should not only prevent malicious actors on other machines behind the same firewall from gaining access to our machines, but also monitor our own actions.

Reckless data sharing can significantly increase exposure to both the threats themselves and the
costs entailed in defending against them.

Goals of Distributed System Security:

Security in a distributed system poses unique challenges that need to be considered when
designing and implementing systems. A compromised computer or network may not be the only
location where data is at risk; other systems or segments may also become infected with
malicious code. Because these types of threats can occur anywhere, even across distances in
networks with few connections between them, new research has been produced to help determine
how well distributed security architectures are actually performing.

In the past, security was typically handled on an end-to-end basis. All the work involved in
ensuring safety occurred “within” a single system and was controlled by one or two
administrators. The rise of distributed systems has created a new ecosystem that brings with it
unique challenges to security. 

Distributed systems are made up of multiple nodes working together to achieve a common goal,
these nodes are usually called peers.
Security Requirements and Attacks Related to Distributed Systems:

A distributed system is composed of many independent units, each designed to run its own tasks without communicating with the rest of them except through a messaging service. Since there is no single point that can perform all necessary operations, the failure of a critical unit can render the system incapable without any warning.

Attacks related to distributed systems are an area of active research. There were two main
schools of thought, those who believed that network worms could be stopped by employing
firewalls and those who did not.

A firewall might do nothing about worms and their ability to spread across various types of
networks, especially wireless networks and the Internet. This was because although firewalls
were able to stop intruders from gaining access through the firewall, they were unable to stop a
worm from self-replicating.

To summarize, there are numerous attacks related to network worms, which have to do with breaking functionality, altering data, or simply deleting it.

DDBMS - Security in Distributed Databases


A distributed system needs additional security measures compared to a centralized system, since there are many users, diversified data, multiple sites and distributed control. In this chapter, we will look into the various facets of distributed database security.

In distributed communication systems, there are two types of intruders −

 Passive eavesdroppers − They monitor the messages and get hold of private
information.
 Active attackers − They not only monitor the messages but also corrupt data by inserting
new data or modifying existing data.

Security measures encompass security in communications, security in data and data auditing.

Communications Security

In a distributed database, a lot of data communication takes place owing to the diversified
location of data, users and transactions. So, it demands secure communication between users and
databases and between the different database environments.

Security in communication encompasses the following −

 Data should not be corrupted during transfer.


 The communication channel should be protected against both passive eavesdroppers and
active attackers.
 In order to achieve the above stated requirements, well-defined security algorithms and
protocols should be adopted.

Two popular, consistent technologies for achieving end-to-end secure communications are −

 Secure Socket Layer Protocol or Transport Layer Security Protocol.


 Virtual Private Networks (VPN).

Data Security

In distributed systems, it is imperative to adopt measures to secure data apart from communications. The data security measures are −

 Authentication and authorization − These are the access control measures adopted to
ensure that only authentic users can use the database. To provide authentication digital
certificates are used. Besides, login is restricted through username/password combination.
 Data encryption − The two approaches for data encryption in distributed systems are −
o Internal to distributed database approach: The user applications encrypt the data
and then store the encrypted data in the database. For using the stored data, the
applications fetch the encrypted data from the database and then decrypt it.
o External to distributed database: The distributed database system has its own
encryption capabilities. The user applications store data and retrieve them without
realizing that the data is stored in an encrypted form in the database.
 Validated input − In this security measure, the user application checks for each input
before it can be used for updating the database. An un-validated input can cause a wide
range of exploits like buffer overrun, command injection, cross-site scripting and
corruption in data.
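
As a small illustration of validated input, the sketch below uses Python's standard sqlite3 module; the table and data are hypothetical. A parameterized query treats user input strictly as data, defeating command (SQL) injection:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # a classic injection attempt

    # Unsafe: string concatenation would let the input rewrite the query.
    # conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

    # Safe: the ? placeholder keeps the input from being parsed as SQL.
    cursor = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
    print(cursor.fetchall())   # [] - the injection attempt matches nothing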

Data Auditing

A database security system needs to detect and monitor security violations in order to ascertain the security measures it should adopt. It is often very difficult to detect a breach of security at the time of its occurrence. One method to identify security violations is to examine audit logs. Audit logs contain information such as −

 Date, time and site of failed access attempts.


 Details of successful access attempts.
 Vital modifications in the database system.
 Access of huge amounts of data, particularly from databases in multiple sites.

All the above information gives an insight into the activities in the database. A periodic analysis of the log helps to identify any unnatural activity along with its site and time of occurrence. This log is ideally stored in a separate server so that it is inaccessible to attackers.

Kerberos
Kerberos provides a centralized authentication server whose function is to authenticate users to servers and servers to users. In Kerberos, an Authentication Server and a database are used for client authentication. Kerberos runs as a trusted third-party server known as the Key Distribution Center (KDC). Each user and service on the network is a principal.

The main components of Kerberos are:

 Authentication Server (AS):
The Authentication Server performs the initial authentication and issues a ticket for the Ticket Granting Service.

 Database:
The Authentication Server verifies the access rights of users against the database.

 Ticket Granting Server (TGS):
The Ticket Granting Server issues the ticket for the Server.

Kerberos Overview:

 Step-1:
The user logs in and requests services on the host, i.e., the user requests a ticket-granting service.

 Step-2:
The Authentication Server verifies the user's access rights using the database and then issues a ticket-granting ticket and a session key. The results are encrypted using the user's password.

 Step-3:
The message is decrypted using the password, and the ticket is then sent to the Ticket Granting Server. The ticket contains authenticators like the user's name and network address.

 Step-4:
The Ticket Granting Server decrypts the ticket sent by the user, verifies the request using the authenticator, and then creates a ticket for requesting services from the Server.

 Step-5:
The user sends the Ticket and Authenticator to the Server.

 Step-6:
The Server verifies the ticket and authenticator, then grants access to the service. After this, the user can access the services. A simplified sketch of steps 2-3 follows.
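
The sketch below is a highly simplified model of steps 2-3, using symmetric Fernet keys from the third-party cryptography package to stand in for Kerberos encryption; all names are illustrative, and real Kerberos adds authenticators, timestamps, lifetimes and realms:

    from cryptography.fernet import Fernet

    # Long-term keys held by the KDC: one derived from the user's password,
    # one known only to the Ticket Granting Server.
    user_key = Fernet(Fernet.generate_key())
    tgs_key = Fernet(Fernet.generate_key())

    # Step 2: the AS issues a session key (encrypted for the user) and a
    # ticket-granting ticket (readable only by the TGS).
    session_key = Fernet.generate_key()
    reply_to_user = user_key.encrypt(session_key)
    tgt = tgs_key.encrypt(b"user=aryan;session_key=" + session_key)

    # Step 3: only a client that knows the password can recover the session
    # key; the client can forward the TGT but cannot read or forge it.
    recovered = user_key.decrypt(reply_to_user)
    assert recovered == session_key

    # Later, the TGS decrypts the TGT and learns the same session key.
    assert session_key in tgs_key.decrypt(tgt)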

Kerberos Limitations

 Each network service must be modified individually for use with Kerberos
 It doesn't work well in a time-sharing environment
 It requires a secured, dedicated Kerberos server
 It requires an always-on Kerberos server
 All passwords are stored encrypted with a single key
 It assumes workstations are secure
 It may result in a cascading loss of trust
 Scalability is a concern

Is Kerberos Infallible?

No security measure is 100% impregnable, and Kerberos is no exception. Because it has been around for so long, hackers have had years to find ways around it, typically through forging tickets, repeated attempts at password guessing (brute force/credential stuffing), and the use of malware to downgrade the encryption.

Despite this, Kerberos remains the best access security protocol available today. The protocol is
flexible enough to employ stronger encryption algorithms to combat new threats, and if users
employ good password-choice guidelines, you shouldn’t have a problem!

What is Kerberos Used For?

Although Kerberos can be found everywhere in the digital world, it is commonly used in secure
systems that rely on robust authentication and auditing capabilities. Kerberos is used for Posix,
Active Directory, NFS, and Samba authentication. It is also an alternative authentication system
to SSH, POP, and SMTP.

Secure Socket Layer (SSL)


Secure Socket Layer (SSL) provides security to the data that is transferred between a web browser and a server. SSL encrypts the link between a web server and a browser, which ensures that all data passed between them remains private and protected from attack.

Secure Socket Layer Protocols: 

 SSL record protocol


 Handshake protocol
 Change-cipher spec protocol
 Alert protocol

SSL Protocol Stack:  

SSL Record Protocol: 

The SSL Record Protocol provides two services to an SSL connection.

 Confidentiality
 Message Integrity

In the SSL Record Protocol, application data is divided into fragments. Each fragment is compressed, and a MAC (Message Authentication Code) generated by an algorithm like SHA (Secure Hash Algorithm) or MD5 (Message Digest) is appended to it. The data plus MAC is then encrypted, and finally an SSL header is appended, as in the sketch below.
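
A rough sketch of the record-processing order (fragment, compress, MAC, encrypt, add header) using only Python's standard library; the XOR "cipher" is a toy stand-in for a real cipher, and the keys are illustrative:

    import hashlib, hmac, zlib

    mac_key = b"mac-key"   # illustrative keys, not real SSL key material
    enc_key = 0x42

    def toy_encrypt(data):
        # Toy XOR cipher standing in for a real block/stream cipher.
        return bytes(b ^ enc_key for b in data)

    def ssl_record(fragment: bytes) -> bytes:
        compressed = zlib.compress(fragment)                          # 1. compress
        mac = hmac.new(mac_key, compressed, hashlib.sha256).digest()  # 2. append MAC
        ciphertext = toy_encrypt(compressed + mac)                    # 3. encrypt
        header = b"\x17\x03\x00" + len(ciphertext).to_bytes(2, "big") # 4. SSL header
        return header + ciphertext

    record = ssl_record(b"hello over SSL")
    print(record[:5].hex(), len(record))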

 
Handshake Protocol: 

Handshake Protocol is used to establish sessions. This protocol allows the client and server to
authenticate each other by sending a series of messages to each other. Handshake protocol uses
four phases to complete its cycle. 

 Phase-1: In Phase-1, both client and server send hello packets to each other. In this phase, the IP session, cipher suite and protocol version are exchanged for security purposes.
 Phase-2: The server sends its certificate and the server-key-exchange message, and ends Phase-2 by sending the server-hello-done packet.
 Phase-3: In this phase, the client replies to the server by sending its certificate and the client-key-exchange message.
 Phase-4: In Phase-4, the change-cipher-spec messages are exchanged, after which the Handshake Protocol ends.
 
Fig.: SSL Handshake Protocol phases (diagrammatic representation)

Change-cipher Protocol: 

This protocol uses the SSL Record Protocol. Until the Handshake Protocol is completed, the SSL record output remains in a pending state; after the handshake, the pending state is converted into the current state.
The change-cipher protocol consists of a single message, which is 1 byte in length and can have only one value. The purpose of this protocol is to cause the pending state to be copied into the current state.
Alert Protocol: 

This protocol is used to convey SSL-related alerts to the peer entity. Each message in this protocol contains 2 bytes.

The first byte, the level, is classified into two parts:

Warning (level = 1):
This alert has no impact on the connection between sender and receiver. Some of them are:

Bad certificate: When the received certificate is corrupt.


No certificate: When an appropriate certificate is not available.
Certificate expired: When a certificate has expired.
Certificate unknown: When some other unspecified issue arose in processing the certificate,
rendering it unacceptable.
Close notify: It notifies that the sender will no longer send any messages in the connection.
 

Fatal Error (level = 2): 

This alert breaks the connection between sender and receiver. The connection is stopped and cannot be resumed, but a new connection can be started. Some of them are:

Handshake failure: When the sender is unable to negotiate an acceptable set of security
parameters given the options available.
Decompression failure: When the decompression function receives improper input.
Illegal parameters: When a field is out of range or inconsistent with other fields.
Bad record MAC: When an incorrect MAC was received.
Unexpected message: When an inappropriate message is received.
The second byte in the Alert protocol describes the error.

Salient Features of Secure Socket Layer: 

 The advantage of this approach is that the service can be tailored to the specific needs of the
given application.
 Secure Socket Layer was originated by Netscape.
 SSL is designed to make use of TCP to provide reliable end-to-end secure service.
 This is a two-layered protocol.

Versions of SSL:

SSL 1 – Never released due to high insecurity.
SSL 2 – Released in 1995.
SSL 3 – Released in 1996.
TLS 1.0 – Released in 1999.
TLS 1.1 – Released in 2006.
TLS 1.2 – Released in 2008.
TLS 1.3 – Released in 2018.

Cryptography and its Types


Cryptography is a technique of securing information and communications through the use of codes, so that only those for whom the information is intended can understand and process it, thus preventing unauthorized access to information. The prefix "crypt" means "hidden" and the suffix "graphy" means "writing". In cryptography, the techniques used to protect information are derived from mathematical concepts and a set of rule-based calculations, known as algorithms, which convert messages in ways that make them hard to decode. These algorithms are used for cryptographic key generation, digital signing and verification to protect data privacy, web browsing on the internet, and protecting confidential transactions such as credit card and debit card transactions.

Techniques used for Cryptography: In today's age of computers, cryptography is often associated with the process in which ordinary plain text is converted to cipher text, a text made such that only the intended receiver can decode it; this process is known as encryption. The process of converting cipher text back to plain text is known as decryption.

Features Of Cryptography are as follows:

1. Confidentiality: Information can only be accessed by the person for whom it is intended
and no other person except him can access it.
2. Integrity: Information cannot be modified in storage or in transit between sender and
intended receiver without the modification being detected.
3. Non-repudiation: The creator/sender of information cannot deny his intention to send
the information at a later stage.
4. Authentication: The identities of sender and receiver are confirmed. As well as
destination/origin of information is confirmed.

Types Of Cryptography: In general there are three types Of cryptography:

1. Symmetric Key Cryptography: It is an encryption system where the sender and receiver of a message use a single common key to encrypt and decrypt messages. Symmetric key systems are faster and simpler, but the problem is that the sender and receiver have to exchange the key in a secure manner somehow. The most popular symmetric key cryptography system is the Data Encryption Standard (DES).
2. Hash Functions: No key is used in this algorithm. A hash value of fixed length is calculated from the plain text, which makes it impossible for the contents of the plain text to be recovered. Many operating systems use hash functions to protect passwords.
3. Asymmetric Key Cryptography: Under this system, a pair of keys is used to encrypt and decrypt information. A public key is used for encryption and a private key is used for decryption. The public key and private key are different. Even if the public key is known by everyone, only the intended receiver can decode the message because he alone knows the private key. A short sketch of the first two types follows this list.
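
A short sketch of the first two types in Python; hashlib is standard, while Fernet comes from the third-party cryptography package (an assumption; any symmetric cipher library would do):

    import hashlib
    from cryptography.fernet import Fernet

    # 1. Symmetric key: one shared key both encrypts and decrypts.
    shared_key = Fernet.generate_key()
    cipher = Fernet(shared_key)
    token = cipher.encrypt(b"meet at noon")
    assert cipher.decrypt(token) == b"meet at noon"

    # 2. Hash function: no key; a fixed-length digest that cannot be inverted.
    digest = hashlib.sha256(b"my password").hexdigest()
    print(digest)   # always 256 bits, whatever the input size

    # 3. Asymmetric key (sketched in a comment): a public key would encrypt
    #    and only the matching private key could decrypt, e.g. RSA from the
    #    same cryptography package.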

Applications Of Cryptography:

1. Computer passwords
2. Digital Currencies
3. Secure web browsing
4. Electronic Signatures
5. Authentication
6. Cryptocurrencies
7. End-to-end encryption
