
DATA STRUCTURE AND

ALGORITHM
FINAL ASSIGNMENT

SUBMITTED BY: SANTOSH ACHARYA

APRIL 28, 2019


INTERNATIONAL SCHOOL OF MANAGEMENT AND TECHNOLOGY
TINKUNE, KATHMANDU

Contents
PART I: Examine abstract data types, concrete data structures and algorithms.
Introduction
Data Structure
Abstract Data Types (ADTs) and Specifications
Example of ADT Specification
Operations in Data Structure
Stack
Use of Stack
Applications of Stack
Operations in Stack
Push Operation
Algorithm and Flow chart
Implementation of Push Operation in C
Pop Operation
Algorithm and Flowchart to explain Pop Operation
Implementation of Stack
Example of Implementing Stack on Linked List in Java
Real world Example of Implementing Stack
Queue
Representation of Queue
Queue Operations
Algorithm For Enqueue Operation
Applications of Queue
Implementation of Queue in Data Structure
Sorting Algorithms
1. Bubble Sort
Performance of Bubble Sort
Efficiency of Bubble Sort
Complexity Analysis of Bubble Sort
Implementation of Bubble Sort Algorithm
2. Quick Sort Algorithm
Strength of Quick Sort
Weakness of Quick Sort
Complexity of Quick Sort
Comparison between Bubble Sort Algorithm and Quick Sort Algorithm
Shortest Path Algorithm
Working Mechanism
Steps to Calculate Shortest Distance Using Dijkstra's
Pseudo Code of Dijkstra's Algorithm
Implementation of Dijkstra's Algorithm in Java
Applications of Dijkstra's Algorithm
Working Mechanism of Bellman-Ford Algorithm
Steps to calculate shortest path using Bellman-Ford
Application of Bellman-Ford Algorithm
Comparison of Dijkstra's Algorithm with Bellman-Ford Algorithm
Conclusion

Part 2: Write an article based on the following key aspects which will be published in an IT magazine.
Abstract data type and Object oriented programming
ADT Specification
Stack
Why Stack is an ADT?
Difference between information hiding and Encapsulation
Imperative ADTs are a basis for object orientation
Object Oriented Programming Language
Features of Object-Oriented Programming Language
Limitations/Disadvantages of OOP
Data Encapsulation
Runtime Polymorphism in Java
Abstraction

Part 3: Prepares a formal written report that includes the following:
ABSTRACT
1. INTRODUCTION
Implementation of Complex ADT using binary search algorithm
2.1 Tree
Binary Search Algorithm
Advantage and Disadvantages of Binary Search Tree Algorithm
Binary Search Tree Implementation
Operations in Binary Search
Implementation in Java
Complete Example of the Code
Binary Search Algorithm Implementation II
Complexity of an algorithm
Best case complexity
Worst case complexity
Error Handling and Report Testing
Example Showing Error in Java
Example Showing Exception in Java
Exception Handling
Advantage of Error and Exception Handling
Exception Handling in Java
Try and Catch
Finally Block
Throw and Throws
Implementation of ADT/algorithm to solve a well-defined problem
Asymptotic analysis for effective algorithm
Interpret a trade-off specifying an ADT
Abstract data types (implementation-independent data structures) offer several advantages over concrete data structures
Conclusion
References


PART I: Examine abstract data types, concrete data structures and algorithms.
You will need to prepare a written document which demonstrates the following:

Create a design specification for data structures explaining the valid operations that can be
carried out on the structures, and determine the operations of a memory stack and show how it
is used to implement function calls in a computer.

Illustrate, with an example, a concrete data structure for a First In First Out (FIFO) queue and
compare the performance of two sorting algorithms. Further, produce an analysis of the
operation of two network shortest-path algorithms, providing an example of each.


Introduction
Api Tech Pvt Ltd is a software company that works on projects based on machine learning, artificial
intelligence, IoT (Internet of Things), etc. The company has recently submitted a proposal to take on a
government project for traffic management through a web and mobile application that reads data from
sensors. The project will also include biometric scanning. Before handing over the project, the
government officials need to know how well the company understands the way different data structures
are used and manipulated, and whether the company is able to integrate the best and most optimized
algorithms so as to develop efficient and accurate applications.

Thus, working as a Software Engineer at one of the leading software companies in Nepal, I have been
given the responsibility of preparing this document to show our company's understanding of the use of
different ADTs and of different sorting and searching algorithms, their applications and their efficiency.
The document is divided into four parts. In the first part, the report presents the definition of Data
Structure and Abstract Data Types, and explains ADT specification with examples and the valid
operations. Secondly, this part covers stacks and queues, their operations and algorithms, and their
applications. In addition to these, sorting algorithms and their working mechanisms are compared,
along with the advantages and drawbacks of using them.

Data Structure
“A data structure is a way to collect and organize data in such a way that we can perform operations
on this data in an effective way. Data structures are the representation of data elements in terms of
some relationships, in order to organize and store them better. For example, we have some data about a
student: the name "Santosh" and the age 21. Here "Santosh" is of the String data type and 21 is of the
integer data type” (Jorge, 2018). In simple words, data structures are structures programmed to store
the required data so that various operations can be performed on it easily. They represent how the data
is to be organized in memory, and they are designed and implemented in a way that reduces complexity
and increases efficiency.

Data structures are classified into two types: primitive data structures and non-primitive (abstract) data
structures. Generally, the things that store data are data structures. Integer, Float, Boolean, Char, etc.
are also examples of data structures, but in more detail they are known as primitive data structures.

There are also complex types of data structures like Linked List, Tree, Graph, Stack, Queue, etc., which
can be defined as abstract data types because each element and structure has different tasks
and operations to perform.


Fig: Classification of Data Structure

Abstract Data Types (ADTs) and Specifications


“The abstract data type, sometimes abbreviated as ADT, is a logical description of how data and its
operations can be viewed without taking into account how they are implemented. This means that we
only care about what the data represents and not how it is built in the end. By providing this level of
abstraction, we create an encapsulation around the data. The idea is that by encapsulating the details of
the implementation, we hide them from the user's point of view” (J.Bern, 2017). In general terms, an ADT
is a useful tool for specifying the logical properties of a data type. An ADT specification describes what
data can be stored and how it can be used, but not how it is implemented or represented in the program.
It is very useful for programmers who wish to use a data type correctly. Abstract data types are not
concerned with how the data will be organized in memory or with what algorithm is followed for the
implementation of the operations. They are called abstract because they give an implementation-independent
view of the data.

The figure below shows what an abstract data type is and how it operates. The user interacts with
the interface, using the operations that have been specified by the abstract data type. The abstract data
type is the shell that the user interacts with. The implementation is hidden one level deeper. The user
is not concerned with the details of the implementation.


Fig: Working Mechanism of ADT

The implementation of an abstract data type, often called a data structure, requires a concrete
view of the data using a set of programming constructs and primitive data types. As we discussed
earlier, separating these two perspectives allows us to define complex data models for our
problems without giving any indication of the details of how the model is actually constructed. This
provides an implementation-independent view of the data. Because there are many different ways to
implement an abstract data type, this independence allows the programmer to change the implementation
details without changing the way the user interacts with the data. The user can remain focused on the
problem-solving process.

Example of ADT Specification


Here is an example of an abstract data type for rational numbers.

A rational number is a number that can be expressed as a ratio of two integers. The operations that can
be performed on rational numbers include creating a rational number, adding rational numbers,
multiplying rational numbers, testing for equality, and so on. Here is an example of the
possible operations:
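As a minimal sketch of such a specification in C (the names make_rational, add, multiply and equals are illustrative assumptions, not part of the original report), the operations might look like this:

```c
#include <stdio.h>

/* A rational number ADT: users work only with the operations below,
   not with the struct fields directly (no reduction to lowest terms here). */
typedef struct {
    int numerator;
    int denominator;
} Rational;

/* create the rational number p/q */
Rational make_rational(int p, int q) {
    Rational r = {p, q};
    return r;
}

/* add two rationals: a/b + c/d = (ad + bc)/bd */
Rational add(Rational x, Rational y) {
    return make_rational(x.numerator * y.denominator + y.numerator * x.denominator,
                         x.denominator * y.denominator);
}

/* multiply two rationals: (a/b) * (c/d) = ac/bd */
Rational multiply(Rational x, Rational y) {
    return make_rational(x.numerator * y.numerator, x.denominator * y.denominator);
}

/* test for equality by cross-multiplication */
int equals(Rational x, Rational y) {
    return x.numerator * y.denominator == y.numerator * x.denominator;
}

int main(void) {
    Rational a = make_rational(1, 2), b = make_rational(1, 3);
    Rational sum = add(a, b);                       /* 5/6 */
    printf("%d/%d\n", sum.numerator, sum.denominator);
    return 0;
}
```

The user of this ADT only needs to know what these operations do, not how the numbers are stored.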


With the use of abstract data types we can perform various kinds of operations in a simple and easy way.
The name of an operation alone tells us what type of operation will be performed; this reduces the
complexity of the program and turns a large program into a simple, smaller form. An ADT does not
specify how the data is organized in memory or which algorithms will be used to perform the
operations. It is called "abstract" because it gives an implementation-independent view. The process of
providing only the essentials and hiding the details is known as abstraction.

A user of a data type does not need to know how the data type has been implemented. For example, we
use the int, float, and char data types knowing only the values they can hold and the operations that can
be performed on them. Therefore, the user only needs to know what the data type can do, but not how it
does it. We can think of an ADT as a black box that hides the internal structure and design of the data
type. Abstract data types are like user-defined data types on which we can perform different operations
without any knowledge of what is inside the data type or how the operations are performed on them. In
a program they are typically realized as classes, which helps to manage the code, reduce its complexity,
and so on. The information inside the data type is not exposed while the class is being used. There are
various benefits of using ADTs in a program, some of which are given below:

Encapsulation: ADTs support encapsulation, so the user-defined data type exists as a complete
entity, including the data definitions, default values, and value constraints; this entity ensures
uniformity and consistency. Once defined, a user-defined data type may participate in many other
user-defined data types, such that the same logical data type always has the same definition, default
values and value constraints, regardless of where it appears in the database.
Reusability: As a hierarchy of common data structures is assembled, these can be re-used
within many definitions, which helps in ensuring uniformity.
Time Saving: With the use of ADTs the program will be completed faster, because the same
functions can be used in various parts of the program, saving the programmer's time.
Flexibility: The ability to create real-world representations of data allows the database
object designer to model the real world as it exists.
Easy to Understand: A normal non-technical user can also understand the program and the flow
of the code with the help of ADTs. Thus it helps to reduce the complexity of the program.

There are many valid operations that can be performed with the help of an abstract data type
specification. Some of them are insertion, deletion, merging, swapping, sorting, searching, etc.

ADTs are an important part of object-oriented programming; an ADT is implemented by concrete data
types or data structures in many programming languages and is described in formal specifications. ADTs
are often implemented as modules or classes whose interface declares the available operations,
sometimes with comments that specify constraints. Some commonly used ADTs that have proved useful
in many applications are Container, List, Set, Multiset, Stack, Queue, Priority queue, Map, Multimap,
Graph, Tree, and Double-ended queue.

Not all ADTs are necessarily equivalent. For example, one stack may or may not have a count
operation for the number of items pushed in and not yet popped.

Operations in Data Structure


The basic operations that are performed on data structures are as follows:

1. Insertion: Insertion means adding a new element to the data structure. The insert operation adds
one or more data elements to an array. Based on the requirement, a new element can be
added at the beginning, at the end, or at any given index of the array. Let's take the example of the
insertion operation in a stack, which is called the push operation. The push operation is the insertion
operation for this data structure: the process of putting a new data element onto a stack is called a push
operation. A push operation involves a series of steps:
Step 1- Checks if the stack is full.


Step 2 - If the stack is full, produce an error and exit.


Step 3 - If the stack is not full, increment top to point to the next empty space.
Step 4 - Add the data element to the stack location where top is pointing.
Step 5 - Return success.

Fig: Push Operation (Insertion)

Here is an example implementation of this algorithm in C.
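A minimal sketch of such a push routine, assuming a fixed-size array stack, a capacity MAXSIZE, and an index top initialized to -1 (these names are illustrative):

```c
#include <stdio.h>

#define MAXSIZE 10

int stack[MAXSIZE];
int top = -1;                 /* -1 means the stack is empty */

/* returns 1 if the stack cannot hold any more elements */
int isFull(void) {
    return top == MAXSIZE - 1;
}

/* push: insert data at the top of the stack */
void push(int data) {
    if (isFull()) {
        printf("Stack overflow: cannot push %d, the stack is full.\n", data);
    } else {
        top = top + 1;        /* move top to the next empty space */
        stack[top] = data;    /* store the new element there */
    }
}

int main(void) {
    push(10);
    push(20);
    printf("Top element is %d\n", stack[top]);   /* prints 20 */
    return 0;
}
```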

From the above code, we can see the logic used to insert data into the stack. First the program checks
whether the stack is full or not; if the stack is full, it displays an error message saying the element cannot
be added because the stack is full. Otherwise, the element is added at the top of the stack.

2. Deletion: Deletion refers to removing an existing element from the data structure and re-organizing
the remaining elements. We can take the example of a deletion operation on a linked list. Deletion of a
node from a list is somewhat more complex than insertion: the link that pointed to the node being
removed is broken, and the pointer is redirected to the node that should come next. The node we want
to delete is then no longer referenced by any pointer, so it simply becomes useless to the list. After a
certain time the garbage collector (the JVM, in a Java implementation) will automatically reclaim it from


memory, since it is no longer used. Let's look at an example of deleting an element.

Consider LA, a linear array with N elements, and K, a positive integer such that K <= N.
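A minimal C sketch of deleting the element at position K from the linear array LA (the concrete values below are only illustrative):

```c
#include <stdio.h>

int main(void) {
    int LA[] = {10, 20, 30, 40, 50};   /* linear array LA with N elements */
    int N = 5;
    int K = 3;                          /* delete the element at position K (1-based) */
    int i;

    printf("Array before deletion:\n");
    for (i = 0; i < N; i++)
        printf("LA[%d] = %d\n", i, LA[i]);

    /* shift every element after position K one place to the left */
    for (i = K - 1; i < N - 1; i++)
        LA[i] = LA[i + 1];
    N = N - 1;                          /* the array now holds one element less */

    printf("Array after deleting the element at position %d:\n", K);
    for (i = 0; i < N; i++)
        printf("LA[%d] = %d\n", i, LA[i]);

    return 0;
}
```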

Output:

In the above program, the elements are first stored in the array and then the element at the given
position is removed, with the later elements shifted forward.

3. Searching: Searching refers to finding the location of a given element in the data structure, for
example by comparing the target value with each element in turn (linear search) or by repeatedly
halving a sorted array (binary search).


Stack
A stack is an Abstract Data Type which is implemented and used in various programming languages. It
is a linear data structure in which operations are performed in a particular order, and it is mostly used
to store temporary data inside a program or computer. The operations performed on the stack follow an
order that may be described as Last In First Out (LIFO) or First In Last Out (FILO). A stack in a
programming language behaves like a real-world stack, for example a deck of cards or a pile of
plates. In general, a stack is an ordered collection of items into which new items may be inserted
and from which items may be deleted at one end. Normally a stack follows the LIFO structure and is an
ordered list of elements of the same type. In a stack, all operations such as insertion and deletion
are permitted at only one end, called the top of the stack. Whether in the real world or in a
programming language, stack operations can be performed from only one end.

The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first
out). Additionally, a peek operation may give access to the top without modifying the stack. The name
"stack" for this type of structure comes from the analogy to a set of physical items stacked on top of
each other, which makes it easy to take an item off the top of the stack, while getting to an item deeper
in the stack may require taking off multiple other items first. LIFO is the principle that the element
inserted into the container last is the first one to be taken out. Suppose at home we have multiple chairs
and we put them together to form a vertical pile. From that vertical pile, the chair which was placed
last is always removed first.

Fig: Stack of Chair


The chair which was placed first is removed last. In this way we can see how the stack relates to everyday
life. Stacks are very useful in programming languages and can be implemented with a linked list or an
array.

Fig: Representation of LIFO Stack

To become more familiar with the stack, we can point to the undo/redo option in a text editor, which keeps
track of the text changes as they are made. Likewise, in language processors a stack is used to create space
internally for local variables and parameters. In today's computing world the stack is a basic data structure
which is very easy to understand and conceptually grasp. It helps the programmer visualize problems
easily, and thus it is a very important topic in data structures.

Use of Stack
Stacks have been used in computers for decades for various purposes, such as expression evaluation,
subroutine return address storage, dynamically allocated local variable storage, subroutine parameter
passing, and so on. Expression evaluation stacks were the first kind of stacks to be widely supported
by special hardware. As a compiler interprets an arithmetic expression, it must keep track of
intermediate stages and precedence of operations using an evaluation stack. In the case of an
interpreted language, two stacks are kept. One stack contains the pending operations that await
completion of higher precedence operations. The other stack contains the intermediate inputs that are
associated with the pending operations. In a compiled language, the compiler keeps track of the
pending operations during its instruction generation, and the hardware uses a single expression
evaluation stack to hold intermediate results. Secondly, the solution to the recursion problem is to use


a stack for storing the subroutine return address. As each subroutine is called, the machine saves the
return address of the calling program on a stack. This ensures that subroutine returns are processed in
the reverse order of subroutine calls, which is the desired operation. Since new elements are allocated
on the stack automatically at each subroutine call, recursive routines may call themselves without any
problems. Modern machines usually have some sort of hardware support for a return address stack. In
conventional machines, this support is often a stack pointer register and instructions for performing
subroutine calls and subroutine returns. This return address stack is usually kept in an otherwise unused
portion of program memory. Thirdly, the final common use for a stack in computing is as a subroutine
parameter stack. Whenever a subroutine is called, it must usually be given a set of parameters upon
which to act. Those parameters may be passed by placing values in registers, which has the
disadvantage of limiting the possible number of parameters. The parameters may also be passed by
copying them or pointers to them into a list in the calling routine's memory. In this case, reentrancy
and recursion may not be possible. The most flexible method is to simply copy the elements onto a
parameter stack before performing a procedure call. The parameter stack allows both recursion and
reentrancy in programs.

Applications of Stack
There are various applications of stacks in data structures, and many algorithms use stacks to perform
their operations. Some of the major applications where stacks are used are given below:

1. String Reversal: A stack can be used to reverse a word or string in a program. For example, if we
push the characters of "ISMTCOLLEGE" onto a stack one by one, the character 'I' is inserted
first; popping all the characters then returns them in reverse order, starting with 'E', which
gives "EGELLOCTMSI". Thus, with a stack, the most recently pushed elements are popped
first.
2. Balanced Parentheses: A stack is used to solve the balanced-parentheses problem in a program or
system, i.e. to check whether a randomly given sequence of parentheses is balanced or not.
3. Redo/Undo Feature: One of the most popular features in text editors, data processing
applications and other common applications is the undo/redo feature, and it is another
important application of a stack. Suppose we write or change something and wish to revert to
the previous state; a stack makes this possible. Two stacks are used: one to move back to the
previous state and another to redo, i.e. to go to the last modified state.


4. Function Call: A stack is used to keep information about the active functions or subroutines.
When one function calls another, the caller is suspended and its state is kept on the stack until
the called function finishes and control returns to it.
5. Syntax Parsing: Many compilers use a stack for parsing the syntax of expressions, program
blocks etc. before translating them into low-level or machine-level code.
6. Infix to Postfix/Prefix Conversion: A stack is used for the evaluation and conversion of prefix,
postfix and infix expressions, since these notations map naturally onto the concept of a stack.
This also makes it easier to apply the same concept to the comparison of expressions.

In addition to these, stacks can be applied almost anywhere. Other applications include
backtracking, the knight's tour problem, rat in a maze, the N-queens problem and Sudoku solvers;
graph algorithms such as topological sorting and strongly connected components; and many other
algorithms such as the Tower of Hanoi, tree traversals, the stock span problem, the histogram problem, etc.
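As one concrete illustration of application 2 above, a minimal C sketch of a balanced-parentheses check (limited to round brackets and a small fixed-size stack for the purpose of the sketch):

```c
#include <stdio.h>

/* push every '(' onto a stack and pop one for every ')';
   the expression is balanced if the stack never underflows and ends up empty */
int isBalanced(const char *expr) {
    char stack[100];                     /* enough nesting depth for this sketch */
    int top = -1;
    int i;

    for (i = 0; expr[i] != '\0'; i++) {
        if (expr[i] == '(') {
            stack[++top] = '(';          /* push an opening parenthesis */
        } else if (expr[i] == ')') {
            if (top == -1)               /* a closing parenthesis with nothing to match */
                return 0;
            top--;                       /* pop the matching opening parenthesis */
        }
    }
    return top == -1;                    /* balanced only if every '(' was matched */
}

int main(void) {
    printf("%s\n", isBalanced("(a+b)*(c-d)") ? "balanced" : "not balanced");
    printf("%s\n", isBalanced("((a+b)") ? "balanced" : "not balanced");
    return 0;
}
```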

Operations in Stack
Stack is a linear data structure which follows a particular order in which the operations are performed.
The order may be LIFO(Last In First Out) or FILO(First In Last Out).

Mainly the following basic operations are performed in the stack:

 Push: Adds an item in the stack. If the stack is full, then it is said to be an Overflow condition.
 Pop: Removes an item from the stack. The items are popped in the reversed order in which
they are pushed. If the stack is empty, then it is said to be an Underflow condition.
 Peek or Top: Returns top element of stack.
 isEmpty: Returns true if stack is empty, else false.

Here we are going to describe stack LIFO operations more briefly.

Push Operation: The push method is one of the fundamental methods of this data structure. Without
this method, the stack would lose all meaning as a LIFO data structure. Therefore, since it is so
important, we will cover in detail how the push method works.
Essentially, the push method can be broken down into the following steps:

 Step 1: Checks if the stack is full.


 Step 2: If the stack is full, produces an error and exit.
 Step 3: If the stack is not full, increments top to point next empty space.
 Step 4: Adds data element to the stack location, where top is pointing.


 Step 5: Returns success.

If the linked list is used to implement the stack, then in step 3, we need to allocate space dynamically.
The above algorithm can be visualized in the form of flowchart to understand more easily.

Algorithm and Flow chart

Fig: Push Method Flow Diagram


From the above algorithm and flow diagram we can understand how the push operation works in the
stack. Basically, push refers to adding an element onto the stack or container. From the figure
above we get the general idea that a push operation is carried out only if there is space to push the element.
First, an array is defined in the program with an index and its size is determined; then a
condition is checked to see whether the container is full or not. If there is still space, the value is
pushed onto the stack; otherwise a message is generated showing that the container is full. Pushes are
performed as long as the array is not full. With each push the index of the top of the array
increases, and the value is stored at the next higher index each time.

Implementation of Push Operation in C

Fig: Implementation of Push Operation

Pop Operation: The pop operation is just the opposite of the push method. Pop refers to deleting
elements from the stack in the reverse order in which they were inserted. In this method, the value is
removed from the stack by decrementing top: if we delete any element from the stack, the stack top
is decreased by 1. In an array implementation of the pop() operation, the data element is not actually
removed; instead, top is decremented to a lower position in the stack to point to the next value. But in
a linked-list implementation, pop() actually removes the data element and deallocates the memory space. A pop
operation may involve the following steps:

 Step 1: Checks if the stack is empty.


 Step 2: If the stack is empty, produces an error and exit.
 Step 3: If the stack is not empty, accesses the data element at which top is pointing.
 Step 4: Decreases the value of top by 1.
 Step 5: Returns success.


Fig: Pop Operation

Algorithm and Flowchart to explain Pop Operation

Fig: Algorithm to define pop operation

Fig: Flow Chart to show Pop Operation


As the given algorithm and flow diagram make clear, the pop operation proceeds only if the stack
contains at least one element. First, a condition is checked to see whether the stack is empty or not; if
the stack is empty, the program terminates by throwing an exception (reporting underflow), otherwise it
goes to the second step. In the second step, the data element at the top is accessed and top is
decremented to a lower position in the stack to point to the next value.
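Continuing the earlier array-based sketch (the same assumed names stack, top and MAXSIZE), a matching pop routine in C might look like this:

```c
#include <stdio.h>

#define MAXSIZE 10

int stack[MAXSIZE];
int top = -1;                 /* -1 means the stack is empty */

/* returns 1 if the stack holds no elements */
int isEmpty(void) {
    return top == -1;
}

/* pop: access the element at top and move top down by one */
int pop(void) {
    int data;
    if (isEmpty()) {
        printf("Stack underflow: the stack is empty.\n");
        return -1;            /* sentinel error value for this sketch */
    }
    data = stack[top];        /* access the element top is pointing to */
    top = top - 1;            /* decrement top; the element is logically removed */
    return data;
}

int main(void) {
    stack[++top] = 10;        /* push 10 and 20 directly for the demonstration */
    stack[++top] = 20;
    printf("Popped %d\n", pop());   /* prints 20: last in, first out */
    printf("Popped %d\n", pop());   /* prints 10 */
    return 0;
}
```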

Implementation of Stack
A stack can be implemented by means of an array, structure, pointer, or linked list. A stack can either
have a fixed size or support dynamic resizing. We can implement a stack in two
ways:

 With the help of an array, or


 With the use of an ArrayList (or a linked list).

Example of Implementing Stack on Linked List in Java


Output of the Program:

A stack implemented on a linked list can grow and shrink according to need at runtime, but it
may require extra memory due to the involvement of pointers.


Real world Example of Implementing Stack


The most common example that every Java developer has been exposed to is the call stack. For the
sake of brevity, we will abbreviate it to CS. If we create a recursive function, each time the function is
called, it will be added onto the CS. The last method/function that is called and returned is the first
that will be popped off the CS. Let me illustrate this with a simple demonstration.
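A minimal sketch of such a demonstration (written here in C for consistency with the other sketches; the call-stack behaviour it illustrates is the same in Java):

```c
#include <stdio.h>

/* factorial calls itself until the base case n <= 1 is reached;
   each call is pushed onto the call stack and popped off as it returns */
long factorial(int n) {
    if (n <= 1)                          /* base case: the call returns and popping begins */
        return 1;
    return n * factorial(n - 1);         /* recursive call pushed onto the call stack */
}

int main(void) {
    printf("5! = %ld\n", factorial(5));  /* prints 120 */
    return 0;
}
```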

In the example above, we are calling the factorial function recursively. A recursive function is a function
that calls itself. The function keeps calling itself until we reach the base case, i.e. the point where the
function actually returns a value. Each time a function calls itself, the function call is added onto the
CS. Once the base case is reached, the last function call is popped off the call stack and a domino
effect occurs until we return with the actual value. Since each function call has its own scope, running
a recursive function can be costly, because the memory continues to pile up on the CS until the base
case is reached.

Queue
A queue is also a kind of abstract data type, much like a stack. Unlike a stack, however, a queue is open at
both of its ends. One end is always used to insert data (enqueue) and the other is used to
remove data (dequeue). A queue follows the First-In-First-Out technique, i.e. the data item stored first will
be accessed first. In a queue, elements are kept in order: data is inserted at one end, which
is called the rear end, and the data element to be removed is taken from the other end, which is
called the front end. Generally, queues are used in our daily life for basic tasks. In a queue the First
In First Out (FIFO) method is used; according to FIFO, the oldest entry in the database, or the first
person in the line, gets the chance to access the information or be served first. In programming, the
queue likewise implements the FIFO principle through its two ends: the removal end of the queue is
referred to as the front or head, and the insertion end is known as the rear or tail. There are also the
terms enqueue, which refers to the insertion of data, and dequeue, which refers to the removal of data.


Fig: FIFO method used on a one-way road

A real-world example of a queue is a single-lane one-way road, in which the vehicle that enters first
exits first. More real-world examples can be seen as queues at ticket windows and
bus stops. In computing, a good example of a queue is print jobs.
If several systems are linked in a network and share a common printer through print sharing,
then every time a user submits a document for printing, the job is stored
in a queue, and the job which enters the print queue first is printed first, which
follows the FIFO concept of a queue.

Similarly, another example of using a queue is the token system used in a bank. Customers
enter a virtual queue the moment they take a ticket from a self-service kiosk.
Once their ticket number reaches the teller, the customer is notified using digital signage,
allowing them to approach the specific counter. This advanced queuing system
frees customers from waiting in long lines and creates a much more pleasant and favourable
environment in which the customer can even indulge in impulse buying. Thus, with the help
of a queue system the bank can provide exceptional service and delightful customer satisfaction,
along with better operational performance and efficiency.

Fig: Example of Queue and Queue Management System In Bank


Representation of Queue
As we now understand, in a queue we access both ends, each for a different reason. The
diagram given below attempts to explain the representation of a queue as a data structure.

Fig: Queue representation in data structure

As in stacks, a queue can also be implemented using Arrays, Linked-lists, Pointers and Structures. For
the sake of simplicity, we shall implement queues using one-dimensional array.

Queue Operations
Queue operations may involve initializing or defining the queue, utilizing it, and then completely
erasing it from the memory. Here we shall try to understand the basic operations associated with
queues:

1. Enqueue: Generally, enqueue is the way of inserting elements into the queue. The elements of
the queue are inserted, or pushed, at the rear only. To insert an element into the queue, i.e. to
perform the enqueue operation, we follow these steps:
 Step 1: Check if the queue is full.
 Step 2: If the queue is full, produce overflow error and exit.
 Step 3: If the queue is not full, increment rear pointer to point the next empty space.
 Step 4: Add data element to the queue location, where the rear is pointing.
 Step 5: return success.


Sometimes, we also check to see if a queue is initialized or not, to handle any unforeseen situations.

Algorithm For Enqueue Operation

Implementation of Enqueue in C
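A minimal circular-array sketch of the enqueue operation in C; the names queue, front, rear, size and CAPACITY are illustrative assumptions:

```c
#include <stdio.h>

#define CAPACITY 100

int queue[CAPACITY];
int front = 0;                      /* index of the oldest element */
int rear = -1;                      /* index of the newest element */
int size = 0;                       /* number of elements currently stored */

/* returns 1 when no more elements fit in the queue */
int isFull(void) {
    return size >= CAPACITY;
}

/* enqueue: insert data at the rear of the queue */
void enqueue(int data) {
    if (isFull()) {
        printf("Queue overflow: cannot enqueue %d.\n", data);
        return;
    }
    rear = (rear + 1) % CAPACITY;   /* move rear to the next empty space, wrapping around */
    queue[rear] = data;             /* store the element there */
    size = size + 1;
}

int main(void) {
    enqueue(10);
    enqueue(20);
    printf("Front element is %d\n", queue[front]);   /* prints 10 */
    return 0;
}
```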

In the above example we can see the use of the enqueue operation in a C program. From the example, it is
clear that enqueue is the process of inserting data or elements into the queue container. First a condition
checks whether the container is full or not; if the container is full, the program
terminates after generating an error message. If there is space for the element, then the rear pointer is first
incremented and the value is then added into the empty space. In this way, one or more elements can be
inserted.

2. Dequeue: Dequeue is the process of accessing and removing elements from the container. In
the dequeue operation the data is accessed at the front and then removed; it cannot be
done from the rear side. For the dequeue operation we follow these steps:

 Step 1 − Check if the queue is empty.

 Step 2 − If the queue is empty, produce underflow error and exit.

 Step 3 − If the queue is not empty, access the data where front is pointing.

 Step 4 − Increment front pointer to point to the next available data element.


 Step 5 − Return success.

Algorithm of Dequeue Operation

Implementation of Dequeue in C

Fig: Example of Dequeue
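Continuing the same circular-array sketch (assumed names queue, front, rear, size and CAPACITY), a matching dequeue routine in C that follows the description below:

```c
#include <stdio.h>

#define CAPACITY 100

int queue[CAPACITY];
int front = 0;                          /* index of the oldest element */
int rear = -1;                          /* index of the newest element */
int size = 0;                           /* number of elements currently stored */

/* returns 1 when the queue holds no elements */
int isEmpty(void) {
    return size <= 0;
}

/* dequeue: access and remove the element at the front of the queue */
int dequeue(void) {
    int data;
    if (isEmpty()) {
        printf("Queue underflow: the queue is empty.\n");
        return -1;                      /* sentinel error value for this sketch */
    }
    data = queue[front];                /* copy the front element to a temporary variable */
    front = (front + 1) % CAPACITY;     /* wrap front back to 0 instead of running past the array */
    size = size - 1;                    /* the queue now holds one element less */
    return data;
}

int main(void) {
    rear = (rear + 1) % CAPACITY; queue[rear] = 10; size++;   /* enqueue 10 and 20 directly */
    rear = (rear + 1) % CAPACITY; queue[rear] = 20; size++;
    printf("Dequeued %d\n", dequeue());   /* prints 10: first in, first out */
    printf("Dequeued %d\n", dequeue());   /* prints 20 */
    return 0;
}
```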


In the above example, the dequeue operation is used to fetch and delete an element from the queue. First,
a condition is applied to check for queue underflow: if the queue is empty an error is
generated, otherwise the next step is followed. We can use size to check for an empty queue, i.e. if
(size <= 0). Then the element at the front of the queue is copied to some temporary variable,
say data = queue[front]. If, for example, the queue capacity is 100 and the element being removed sits at
the last index (99), then after the dequeue, front must wrap around and get
updated to 0 instead of 100; otherwise the array index would go beyond its bounds. Then the
queue size is decreased by 1. In this way, the dequeue operation is performed.

To make the above operations more efficient and usable, the following supporting operations are also
provided:

 Peek() − Gets the element at the front of the queue without removing it.
 isfull() − Checks if the queue is full.
 isempty() − Checks if the queue is empty.

Applications of Queue
As the name suggests, a queue is used whenever we need to manage a group of objects in an order in
which the first one coming in is also the first one to get out, while the others wait their
turn, as in the following situations:

 Serving requests on a single shared resource, like a printer, CPU task scheduling, etc.
 In real-life scenarios, call centre phone systems use queues to hold callers
in order until a service representative is free.
 Handling of interrupts in real-time systems. The interrupts are handled in the same order as
they arrive, i.e. first come, first served.
 Queues are used in the simulation of traffic control systems.
 Used in printers to print pages in turn.
 When data is transferred asynchronously (data not necessarily received at the same rate
as sent) between two processes. Examples include IO buffers, pipes, file IO, etc.
 Buffers on MP3 players and portable CD players, and iPod playlists. A playlist for a jukebox: songs
are added to the end and played from the front of the list.
 When programming a real-time system that can be interrupted (e.g., by a mouse click or
wireless connection), it is necessary to attend to the interrupts immediately, before proceeding


with the current activity. If the interrupts should be handled in the same order they arrive, then
a FIFO queue is the appropriate data structure.

Implementation of Queue in Data Structure


Queue can be implemented using an Array, Stack or Linked List. The easiest way of implementing a
queue is by using an Array.

Initially the head (FRONT) and the tail (REAR) of the queue point at the first index of the array
(starting the array index from 0). As we add elements to the queue, the tail keeps on moving ahead,
always pointing to the position where the next element will be inserted, while the head remains at the
first index.

When we remove an element from Queue, we can follow two possible approaches (mentioned [A] and
[B] in above diagram). In [A] approach, we remove the element at head position, and then one by one


shift all the other elements forward by one position. In approach [B] we remove the element from the head
position and then move head to the next position. In approach [A] there is an overhead of shifting the
elements one position forward every time we remove the first element. In approach [B] there is no
such overhead, but whenever we move head one position ahead after removing the first element, the usable
size of the queue is reduced by one space each time.

Example of Implementing Queue using Array


Output of the Program:

The time complexity of the enqueue(), dequeue(), peek(), isEmpty(), and size() functions is constant, i.e.
O(1).


We can also implement a queue using an ArrayList; below is an example of implementing it using an
ArrayList.

In this way, we can implement queues in various data operations and programs.

Sorting Algorithms
Sorting is nothing but arranging information in ascending or descending order. The term
sorting came into the picture as people realized the importance of searching quickly. There are
many things in real life that we need to search for: a specific document in a database, a roll
number in a merit list, a particular phone number in a telephone directory, a specific page in
a book, etc. All of this would be a mess if the data were stored unordered and unsorted, but
fortunately the idea of sorting came into existence, making it easier for everyone to arrange data
in order, and as a result making it simpler to search. Sorting arranges records in a sequence, which makes
searching less complicated.

“A sorting algorithm is used to reorder a group of items into a specific order. This order could be
alphabetical or some increasing or decreasing order. Sorting algorithms are also beneficial in
rapidly advancing fields like machine learning, partly due to the fact that in the big data age and
beyond, one of the biggest capabilities of IT systems is to manage massive sets of information. This
inherently involves quite a lot of sorting. In machine learning, where the machine learns from big
sets of training data, sorting algorithms may be a major element of the intellectual and
computational work involved in building the systems and implementing them” (Michel, 2018).

For example, if someone asked me how I would arrange a shuffled deck of cards in order, I would
say I would start checking every card and build the ordered deck as I go. It could take me hours to arrange
the deck in order. To solve such problems, or to arrange and sort data, computer scientists have
invented various sorting algorithms, so that the time taken for the task is reduced to a fraction of a second.
The two main criteria they use to compare sorts are the time taken to sort the given data and the memory
space required to do so.

Fig: Simple Example of Sorting Algorithm

Sorting is crucial in programming for the same reason it is important in everyday life: it is
easier and faster to locate items in a sorted list than in an unsorted one. Sorting algorithms may be used in
software to sort an array for later searching or for writing out to an ordered file or document.

In computer science, arranging data in an ordered sequence is called "sorting". Sorting is a common
operation in many programs, and efficient algorithms to perform it have been developed. The most
common uses of sorted sequences are making lookup or search efficient and making the merging of
sequences efficient.

There are various methods available for sorting, differentiated by their efficiency and space
requirements. Some of them are given below:

1. Bubble Sort
2. Insertion Sort
3. Selection Sort
4. Quick Sort
5. Merge Sort
6. Heap Sort

The above algorithms are all used to sort data elements in programming, at different levels. They can be
compared in various ways, but here we are going to compare two of them based on their performance. In
particular, Big-O notation is used to measure performance; it is the standard through which the
performance of algorithmic functions is expressed. Mainly, there are four different time complexities
that reflect the performance of a function:

 O(1) - Constant Time Complexity


 O(n) - Linear Time Complexity
 O(log n) - Logarithmic Time Complexity
 O(n^2) - Quadratic Time complexity

Fig: Graph of different time complexities and their performance.


In the above figure, the graph shows the time complexities and the performance of algorithmic
functions. Here n represents the number of elements in the array; as n increases, the growth of each
time complexity can be seen and compared. Below is a comparison of the sorting algorithms listed
above by their best, average and worst case time complexity.
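The comparison can be summarized using the standard best, average and worst case complexities of these algorithms (a reference summary, not measurements taken for this report):

Algorithm        Best Case     Average Case   Worst Case
Bubble Sort      O(n)          O(n²)          O(n²)
Insertion Sort   O(n)          O(n²)          O(n²)
Selection Sort   O(n²)         O(n²)          O(n²)
Quick Sort       O(n log n)    O(n log n)     O(n²)
Merge Sort       O(n log n)    O(n log n)     O(n log n)
Heap Sort        O(n log n)    O(n log n)     O(n log n)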

From the above table, we can say that Merge Sort performs indeed quite well on average and does not
give bad results even in the worst case. However, for each situation another algorithm, Bubble and
Insertion Sorts when the array is already sorted and Quick Sort when the array is randomly sorted,
outperforms it. The reason is that when the array is already sorted, simple algorithms such as Bubble
Sort and Insertion Sort reach their best case scenario (but reaching their worst case scenario if the array
is sorted in reverse order). Furthermore, even if Quick Sort and Merge Sort have same complexity on
average and in best-case scenario, the constants (hidden by the big-O notation) are much smaller in
Quick Sort, leading to a smaller running time on average. However, the pitfall is that Quick Sort
performs badly in the worst-case scenario, so it can be quite a risky alternative.

Here we are going to select two of the algorithms for comparison from different perspectives,
including their working mechanism, efficiency, and performance in the best and worst cases of
each. For the comparison we have selected the Bubble Sort and Quick Sort
algorithms.


1. Bubble Sort
The Bubble Sort algorithm is categorized as one of the easiest and simplest algorithms used to arrange
and sort the elements of an array. In bubble sort, elements are compared based on their values,
one by one. “Bubble sort is the simplest iterative algorithm; it
operates by comparing each item or element with the item next to it and swapping them if needed.
In simple words, it compares the first and second elements of the list and swaps them if they are out
of the specified order. Similarly, the second and third elements are compared and swapped, and this
comparing and swapping goes on to the end of the list. The number of comparisons in the first iteration
is n-1, where n is the number of elements in the array. The largest element will be at the nth position
after the first iteration. After each iteration, the number of comparisons decreases, until in the last
iteration only one comparison takes place” (Jorge, 2016).

Fig: Use of Bubble Sort

The logic behind the bubble sort algorithm is very simple: it keeps comparing adjacent values and
swapping them until all the values are in order. If n is the number of elements in the array, then the
number of iterations will be n-1. After the first pass, the largest number, or the position of the largest
number, will be the nth position.

It is known as bubble sort because, with every complete iteration, the largest element in the given array
bubbles up towards the last place or the highest index, just like a water bubble rises up to the water


surface. Sorting takes place by stepping through all the elements one by one, comparing each with its
adjacent element and swapping them if required.

Performance of Bubble Sort


The best-case scenario is depicted by O(n). In this instance, the algorithm executes in a time directly
proportional to the size of the array. The worst-case scenario occurs when the array
needs to be 'reverse sorted' and is depicted by O(n²), where the time grows quadratically as the
number of elements to be sorted increases.

Strengths of Bubble Sort

 It is easy to understand
 Easy to implement
 No demand for large amounts of memory
 Once sorted, data is available for processing

Weakness of Bubble Sort

 Sorting takes a long time.


 It does not deal well with a list containing a huge number of items because it requires n-squared
processing steps for every n number of elements to be sorted.

Efficiency of Bubble Sort


No. of comparisons (in each pass) = (n-1)

No. of passes = (n-1)

So, efficiency = No. of comparisons per pass * No. of passes

= (n-1) * (n-1)

= n^2 - 2n + 1

Considering the highest-order term only,

Efficiency = O(n^2).


Complexity Analysis of Bubble Sort


In Bubble Sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass and so on. So the total number of comparisons will be

(n-1) + (n-2) + (n-3) + ... + 2 + 1 = n(n-1)/2

Hence the time complexity of Bubble Sort is O(n^2).

The main advantage of Bubble Sort is the simplicity of the algorithm. The space complexity of Bubble Sort is O(1), because only a single additional memory space is required, i.e. for the temp variable. Also, the best-case time complexity is O(n), which occurs when the list is already sorted.

Following are the time and space complexities for the Bubble Sort algorithm.

 Worst Case Time Complexity [Big-O]: O(n^2)
 Best Case Time Complexity [Big-omega]: O(n)
 Average Time Complexity [Big-theta]: O(n^2)
 Space Complexity: O(1)

Implementation of Bubble Sort Algorithm


Following are the steps involved in bubble sort (for sorting a given array in ascending order):

 Starting with the first element (index = 0), compare the current element with the next element of the array.
 If the current element is greater than the next element, swap them.
 If the current element is less than the next element, move to the next element. Repeat Step 1.


Let's consider an array with values {5, 1, 6, 2, 4, 3}. Below is a pictorial representation of how bubble sort will sort the given array.

As we can see in the representation above, after the first iteration 6 is placed at the last index, which is its correct position. Similarly, after the second iteration 5 will be at the second-last index, and so on.
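A minimal Java sketch of these steps, using the same sample array, might look like the following (the class name is illustrative):

```java
import java.util.Arrays;

public class BubbleSortDemo {

    // Repeatedly compare adjacent elements and swap them if they are out of order.
    // After pass i, the i largest elements are already in their final positions.
    static void bubbleSort(int[] arr) {
        for (int pass = 0; pass < arr.length - 1; pass++) {
            for (int j = 0; j < arr.length - 1 - pass; j++) {
                if (arr[j] > arr[j + 1]) {          // current element greater than the next one
                    int temp = arr[j];              // swap using a single temp variable (O(1) space)
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 6, 2, 4, 3};            // the sample array used above
        bubbleSort(data);
        System.out.println(Arrays.toString(data));  // [1, 2, 3, 4, 5, 6]
    }
}
```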

2. Quick Sort Algorithm


Quick Sort is one of the most popular of the different sorting algorithms and is widely used in practice. In quick sort, the elements of the set are divided into parts repeatedly until it is not possible to divide further. It is also known as partition-exchange sort because it uses a pivot (key element) for partitioning the elements. Quick Sort makes partitions to separate the elements into two different parts: in one partition the elements larger than the key element are collected, and in the other the elements smaller than the key element. After partitioning, the two parts are sorted recursively.

“Quick Sort is also based on the concept of Divide and Conquer, just like merge sort. But in quick sort
all the heavy lifting(major work) is done while dividing the array into subarrays, while in case of merge
sort, all the real work happens during merging the subarrays. In case of quick sort, the combine step
does absolutely nothing. This algorithm divides the list into three main parts:


 Elements less than the Pivot element


 Pivot element(Central element)
 Elements greater than the pivot element

Pivot element can be any element from the array; it can be the first element, the last element or any random element" (Jack, 2018). For example, in the array {52, 37, 63, 14, 17, 8, 6, 25}, we take 25 as the pivot, so after the first pass the list will be changed like this: {6 8 17 14 25 63 37 52}.

Hence after the first pass the pivot is set at its position, with all the elements smaller than it on its left and all the elements larger than it on its right. Now 6 8 17 14 and 63 37 52 are considered as two separate sub-arrays, and the same recursive logic is applied to them; we keep doing this until the complete array is sorted. Quick Sort applies several steps:

 Step 1 − Choose the highest index value as the pivot
 Step 2 − Take two variables to point to the left and right of the list, excluding the pivot
 Step 3 − Left points to the low index
 Step 4 − Right points to the high index
 Step 5 − While the value at left is less than the pivot, move right
 Step 6 − While the value at right is greater than the pivot, move left
 Step 7 − If Step 5 and Step 6 both stop, swap the values at left and right
 Step 8 − If left ≥ right, the point where they meet is the new pivot position

Strength of Quick Sort


 Complexity of O(n log(n))
 Quick sort is one of the fastest sorting algorithms
 It possesses a good average case behavior.

Weakness of Quick Sort


 It is harder to implement.
 It can be considered an unstable sorting algorithm.
 It is not an in-place sorting algorithm in its simplest form.
 It is more complex and heavily recursive.


Fig: Sorting using Quick Sort

After selecting an element as the pivot, which is the last index of the array in our case, we divide the array for the first time. In quick sort we call this partitioning. It is not a simple breaking down of the array into two sub-arrays; in partitioning, the array elements are positioned so that all the elements smaller than the pivot are on the left side of the pivot and all the elements greater than the pivot are on the right side of it, and the pivot element ends up at its final sorted position. The elements to the left and right may not themselves be sorted.


Then we pick the sub-arrays (the elements on the left of the pivot and the elements on the right of the pivot) and perform partitioning on them by choosing a pivot within each sub-array.
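A minimal Java sketch of this partition-and-recurse scheme, assuming the last element is chosen as the pivot (Lomuto-style partitioning; the class name is illustrative):

```java
import java.util.Arrays;

public class QuickSortDemo {

    // Sort arr[low..high] by partitioning around a pivot and recursing on both sides.
    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int p = partition(arr, low, high);   // pivot lands at its final sorted position
            quickSort(arr, low, p - 1);          // elements smaller than the pivot
            quickSort(arr, p + 1, high);         // elements greater than the pivot
        }
    }

    // Lomuto-style partition: the last element is taken as the pivot.
    static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;                         // boundary of the "smaller than pivot" region
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);                  // place the pivot between the two regions
        return i + 1;
    }

    static void swap(int[] arr, int a, int b) {
        int t = arr[a]; arr[a] = arr[b]; arr[b] = t;
    }

    public static void main(String[] args) {
        int[] data = {52, 37, 63, 14, 17, 8, 6, 25};   // sample array used earlier
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data));
    }
}
```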

Efficiency of Quick Sort

Assume that the file size n is a power of 2, i.e. n = 2^m, so that m = log2 n. Also assume that the proper position of the pivot is always the middle. In the first pass there are about n comparisons (actually n-1), and the file splits into two sub-files of size n/2 each, and so on.

So the total number of comparisons is approximately

n + 2 x (n/2) + 4 x (n/4) + ... (m terms) = n x m

Thus, Efficiency = O(n x m) = O(n log2 n) = O(n log n).

Complexity of Quick Sort


For an array in which partitioning leads to unbalanced sub-arrays, to the extent that on one side there are no elements at all and all the elements greater than the pivot sit on the other side, and if we keep getting such unbalanced sub-arrays, the running time is the worst case, which is O(n^2). Whereas if partitioning leads to almost equal sub-arrays, the running time is the best case, with time complexity O(n log n).

 Worst Case Time Complexity [Big-O]: O(n^2)
 Best Case Time Complexity [Big-omega]: O(n log n)
 Average Time Complexity [Big-theta]: O(n log n)
 Space Complexity: O(log n) on average (for the recursion stack)

As we know now, if the sub-arrays produced after partitioning are unbalanced, quick sort will take more time to finish. If someone knows that we always pick the last index as the pivot, they can intentionally provide an array which will result in the worst-case running time for quick sort.

To avoid this, we can pick a random pivot element instead. It does not change the algorithm: all we need to do is pick a random element from the array, swap it with the element at the last index, make it the pivot and carry on with quick sort. The space required by quick sort is very small; on average only O(log n) additional space is needed for the recursion stack.

Quick sort is not a stable sorting technique, so it might change the relative order of two equal elements in the list while sorting.

Comparison between Bubble Sort Algorithm and Quick Sort Algorithm


There are various points on which we can compare these two algorithms. First let's look at the table, then we will discuss the differences between them.

Based on          Quick Sort                                        Bubble Sort
Type              A sorting algorithm                               Also a sorting algorithm
Method            Divide-and-conquer: a pivot element becomes       Swaps two adjacent elements in order
                  the focal point of division of the given array    to put them in the right place
Time Complexity   O(n log n)                                        O(n^2)
Coding            Complex                                           Simpler
Performance       Recursive, faster                                 Iterative, slower
Time Consumption  Less time to run the algorithm                    More time to run the algorithm
Usefulness        Considered to be more useful                      Considered to be less useful

Fig: Comparison table of the two sorting algorithms

Quick Sort and Bubble Sort are two different algorithms used to sort data. First, we can compare them by the method they use to produce the result. The Bubble Sort algorithm is easier and simpler: it makes multiple passes over a list, compares adjacent elements and exchanges those that are out of order to put them in the right place, while quick sort goes through a longer process to sort the elements. Quick Sort performs three basic steps: it breaks a large array into smaller sub-arrays, takes a pivot from the elements, and then re-orders the array so that all elements with values less than the pivot come before it, while all elements with values greater than the pivot come after it. After this partitioning the pivot is in its final position, and the same steps are applied to both sub-arrays until the complete list is sorted. Secondly, bubble sort uses a brute-force technique while quick sort uses a divide-and-conquer technique. On the one hand, Bubble Sort has a time complexity of O(n^2), which means that the amount of work grows quadratically with the value of n: if the value of n is 2 the loop ideally runs about 4 times, and if it is 4 it runs about 16 times, and so on. Thus it causes huge time issues when the value of n is large. On the other hand, Quick Sort has a time


complexity of O(n log n), which, although it may occasionally be beaten by the simpler techniques in special cases, yields much faster results in general. We can also compare these algorithms based on the coding. Undoubtedly, Bubble Sort is one of the easiest sorting algorithms to write from any coder's perspective; in fact, bubble sort is one of the first sorting techniques that coders are taught in order to introduce them to the sorting world. Quick Sort, on the other hand, has a more complex structure: with the involvement of pivot points and the sub-routine that partitions the sub-arrays, it becomes a little complex again. In addition, when sorting large arrays Bubble Sort performs poorly due to its heavy time consumption, so it is mostly used for educational purposes, to make the concepts of sorting easier to grasp for beginners; still, it has a respectable place in sorting arrays with a small number of elements. Quick Sort is considered to be more useful for industrial and production use, since it gives quicker, recursive results, especially when compared to Bubble Sort.

Overall, both are good algorithms for performing sorting operations on different data. Bubble sort is the simpler and easier algorithm but cannot be used on large data sets, while the quick sort technique is widely used in spite of being longer and more complex than other algorithms. Between these two algorithms, we can say quick sort is much better than bubble sort, and in terms of efficiency too it is better. In conclusion, no sorting algorithm is always optimal; we can choose whichever one suits our needs. If you need an algorithm that is the quickest for most cases, you don't mind it being a bit slow in rare cases, and you don't need a stable sort, use Quicksort. Otherwise, use the algorithm that suits your needs better.


Shortest Path Algorithm


A shortest path algorithm solves a classic problem on the graph data structure: finding the shortest path in a weighted graph. These algorithms are mainly used to find a path between two vertices of a graph such that the total sum of the edge weights is minimum. They can be considered a family of algorithms specially designed to solve the shortest path problem; through calculation and graphical representation, the required path or route is discovered. Shortest path algorithms have many applications. As noted earlier, mapping software like Google or Apple Maps makes use of shortest path algorithms. They are also important for road networks, operations and logistics research, and for computer networks like the Internet. With the help of shortest path algorithms, different applications and software have been developed which help estimate the path to take, saving the user time and money. For example, if we want to go to Gongabu Buspark from Tinkune, we can apply a shortest path algorithm to get there for a very reasonable bus fare and in a small amount of time; with this algorithm we can account for distance, obstacles and so on. In real-life situations the transportation network is usually stochastic and time-dependent: a traveller traversing a link daily may experience different travel times on that link, due not only to fluctuations in travel demand (the origin-destination matrix) but also to incidents such as work zones, bad weather, accidents and vehicle breakdowns. Different applications installed on a mobile phone help the traveller make decisions that save time and money.

Fig: Shortest Path Algorithm with arrow representation

As shown in the above figure, we can treat the labelled nodes as the starting point and the destination point. Thus, if we consider A as the starting point and E as the ending point, and we want to travel from A to E,

then we have to use different routes. Each route has a defined weight or number, so as per the requirement we can travel via the shortest path that suits us. There are various algorithms to calculate the shortest (weighted) path between a pair of nodes. On the basis of the source, there are two types of shortest path algorithms, given below:

Single Source: Single-source shortest path algorithms operate under the following principle:

If the goal of the algorithm is to find the shortest path between only two given vertices, s and t, then the algorithm can simply be stopped when that shortest path is found. Because there is no way to decide which vertices to "finish" first, all algorithms that solve for the shortest path between two given vertices have the same worst-case asymptotic complexity as single-source shortest path algorithms. This paradigm also works for the single-destination shortest path problem: by reversing all of the edges in a graph, the single-destination problem can be reduced to the single-source problem. So, given a destination vertex t, this algorithm will find the shortest paths starting at all other vertices and ending at t.

Bellman-Ford is one of the popular single-source shortest path algorithms.
All-Pairs Algorithm: All-pairs shortest path algorithms follow this definition:

The most common algorithm for the all-pairs problem is the Floyd-Warshall algorithm. This algorithm returns a matrix of values M, where each cell M[i][j] is the distance of the shortest path from vertex i to vertex j. Path reconstruction makes it possible to recover the actual path taken to achieve that shortest distance, but it is not part of the fundamental algorithm.

Two popular shortest path finding algorithms are Dijkstra's Algorithm and the Bellman-Ford Algorithm. Brief information and an overall comparison between them is given below:

1. Dijkstra’s Algorithm
Dijkstra's algorithm is a single-source algorithm which is used to find the shortest path between nodes in a graph. It is different from the minimum spanning tree, because the shortest path between two vertices may not include all the vertices of the graph. Dijkstra's algorithm was developed to find the shortest path among thousands of nodes. It is very difficult to find the shortest


path among thousands of nodes by direct physical searching, so to solve this problem the algorithm was developed and programmed on a computer. It is an algorithm through which we can measure the shortest distance through the different nodes. The algorithm selects one of the vertices as the source and another as the destination. As we have to find the shortest path, this is a minimisation, i.e. an optimisation problem, and optimisation problems can be solved using the greedy method. This algorithm follows the greedy method: the problem is solved in stages, taking one step at a time and considering one input at a time to reach an optimum solution. In the greedy method there are predefined procedures, and we follow those procedures to get an optimum solution. Therefore, Dijkstra's algorithm gives a procedure for obtaining an optimum solution, that is, the minimum-length shortest path.

Working Mechanism
Dijkstra's Algorithm works on the basis that any sub path B -> D of the shortest path A -> D between
vertices A and D is also the shortest path between vertices B and D.

Dijkstra used this property in the opposite direction, i.e. we overestimate the distance of each vertex from the starting vertex, then visit each node and its neighbours to find the shortest sub-path to those neighbours. As mentioned already, the algorithm uses a greedy approach in the sense that we pick the next best solution hoping that the end result is the best solution for the whole problem. It solves the single-source shortest path problem for a directed graph with non-negative edge weights. For example, if the vertices (nodes) of the graph represent cities and the edge weights represent driving distances between pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest route between two cities. This algorithm can also be used to find the shortest path to a destination in a traffic network.


Steps to Calculate the Shortest Distance Using Dijkstra's Algorithm


 Start
 Take any node as the initial node or source and another as the destination node.
 Assign every node a tentative distance.
 Set the initial/source node as the current node and mark all the nodes as unvisited.
 For the current node, consider all the unvisited neighbours and calculate their tentative distances. Then compare the current distance with the calculated distance and assign the smaller value.
 When all the neighbours of the current node have been considered, mark it as visited. [Visited nodes won't be checked again.]
 If the destination node has been marked visited, stop.
 End

Let’s take an example to understand the concept more clearly.

Here is the graph, with nodes a, b, c, d, e and z. The weight of each pair of directly connected nodes is given in the figure, and we have to find the shortest distance between two nodes. First we assign one node as the source and another as the destination; in our example we take ‘A’ as our source node and ‘Z’ as our destination node. These two nodes are not connected directly but are linked through other nodes, so we have to go through the different nodes to calculate the distance and identify the shortest path between them. We assign a tentative distance to all the nodes, then set the initial (source) node as the current node and mark

all other nodes as unvisited. In the above figure we have direct connections from A to B and from A to C, so those distances are written in, while all other nodes are marked as infinity. Next we look at the shorter of A to B and A to C: in the figure we have 2 for A to C and 4 for A to B. We choose the shorter distance, so we go from A to C. A to B and A to C have now been examined, so we mark them accordingly. We are currently on node C, so we check all the routes connected to C: if we go from C to E the value will be 12, if we go from C to D the value will be 10, and via C to B the distance will be only 3. Thus we replace the previous 4 with 3, and the minimum-distance path to B goes through C. Again we check the distances of the paths connected to B: B is directly connected to D, so the value of D becomes 8, replacing infinity. Finally, D is connected to two nodes, D to E and D to Z. If we go through D to Z the value is 8 + 6 = 14, but if we go through D to E the value is 8 + 2 = 10, which is shorter. Now the current position is on E, so from E to Z we get 10 + 3 = 13, which is the shortest distance of all to Z.

Pseudo Code of Dijkstra Algorithm


We need to maintain the path distance of every vertex. We can store that in an array of size v, where v is the number of vertices. We also want to be able to get the shortest path itself, not only the length of the shortest path. For this, we map each vertex to the vertex that last updated its path length. Once the algorithm is over, we can backtrack from the destination vertex to the source vertex to find the path. A minimum priority queue can be used to efficiently retrieve the vertex with the least path distance.

Fig: Pseudo Code for Dijkstra’s Algorithm


Implementation of Dijkstra’s Algorithm in Java


The idea of Dijkstra is simple. Dijkstra partitions all nodes into two distinct sets: unsettled and settled. Initially all nodes are in the unsettled set, i.e. they must still be evaluated. A node is moved to the settled set once a shortest path from the source to this node has been found. Initially the distance of each node to the source is set to a very high value, and only the source is in the set of unsettled nodes. The algorithm runs until the set of unsettled nodes is empty. In each iteration it selects the node with the lowest distance from the source out of the unsettled nodes, reads all edges which are outgoing from that node and evaluates, for each destination node that is not yet settled, whether the known distance from the source to this node can be reduced by using the selected edge. If it can, the distance is updated and the node is added to the nodes which need evaluation.

The following describes a simple implementation of Dijkstra's algorithm. It does not use any performance optimisation (e.g. using a priority queue for the unsettled nodes, or caching the result of the target evaluation of the edges) in order to keep the algorithm as simple as possible.
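A minimal sketch of such an unoptimised implementation is given below; it assumes an adjacency-matrix representation in which 0 means "no edge", and the class and method names are illustrative rather than taken from any particular library. The weights are read off the worked example above (A-B = 4, A-C = 2, C-B = 1, and so on), so the printed distance to Z should be 13.

```java
import java.util.Arrays;

public class DijkstraDemo {

    static final int INF = Integer.MAX_VALUE;

    // Simple O(V^2) Dijkstra over an adjacency matrix; graph[u][v] = 0 means "no edge".
    // Returns the shortest distance from src to every vertex (weights must be non-negative).
    static int[] dijkstra(int[][] graph, int src) {
        int n = graph.length;
        int[] dist = new int[n];
        boolean[] settled = new boolean[n];
        Arrays.fill(dist, INF);
        dist[src] = 0;

        for (int i = 0; i < n; i++) {
            // pick the unsettled vertex with the smallest tentative distance
            int u = -1;
            for (int v = 0; v < n; v++) {
                if (!settled[v] && (u == -1 || dist[v] < dist[u])) u = v;
            }
            if (dist[u] == INF) break;           // remaining vertices are unreachable
            settled[u] = true;                   // its distance is now final

            // relax every edge leaving u
            for (int v = 0; v < n; v++) {
                if (graph[u][v] != 0 && !settled[v] && dist[u] + graph[u][v] < dist[v]) {
                    dist[v] = dist[u] + graph[u][v];
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // 0=A, 1=B, 2=C, 3=D, 4=E, 5=Z -- weights following the worked example above
        int[][] graph = {
                {0, 4, 2, 0, 0, 0},
                {4, 0, 1, 5, 0, 0},
                {2, 1, 0, 8, 10, 0},
                {0, 5, 8, 0, 2, 6},
                {0, 0, 10, 2, 0, 3},
                {0, 0, 0, 6, 3, 0}
        };
        System.out.println(Arrays.toString(dijkstra(graph, 0)));  // distance to Z (index 5) is 13
    }
}
```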


Applications of Dijkstra’s Algorithm


 It is used to find the shortest path.
 It is used in geographical maps, where locations on the map correspond to the vertices of the graph and the distances between locations correspond to the edges.
 It is used in IP routing, for example in Open Shortest Path First (OSPF).
 It is used in the telephone network.

The most common use of Dijkstra's algorithm is to find the shortest paths from the source vertex to all other vertices in a graph. The time complexity of this basic algorithm is O(V^2), where V is the number of vertices (nodes or points); the edge weights must be non-negative. The complexity can be reduced by using the algorithm together with a min-heap, whose Extract-Min() function returns the node with the smallest key; the overall complexity then depends largely on the Extract-Min() function.

In the worst case the total number of edges is V(V-1)/2, where V is the number of vertices, i.e. E >> V and E ~ V^2. The time complexity therefore varies by case. Dijkstra's algorithm with an adjacency list and a priority queue runs in O((V + E) log V); in the worst case E >> V, so this becomes O(E log V). Here E is the number of edges and V the number of vertices. Using Dijkstra's algorithm with a matrix and a priority queue gives O(V^2 + E log V), and in the worst case E ~ V^2, so O(V^2 + E log V) ~ O(E log V).


Dijkstra's algorithm is often used together with such auxiliary data structures because they keep the complexity under control. There is a limitation with this algorithm: it can only see the neighbours of the current node, and because it does not backtrack it can spend effort exploring branches that turn out to be unpromising. The major drawback of the algorithm is that it performs a blind search, thereby consuming time and resources on nodes that are not needed. Another problem is that it cannot handle negative edge weights, in which case it often cannot obtain the correct shortest path. Moreover, one thing we haven't looked at is the problem of finding shortest paths that must go through certain points. This is a hard problem and is reducible to the Travelling Salesman Problem, which in practice means it can take a very long time to solve even for very small inputs.

2. Bellman-Ford Algorithm
Like Dijkstra's algorithm, the Bellman-Ford algorithm is a way of finding the shortest paths from a source vertex to all other vertices in a graph. That means the Bellman-Ford algorithm requires one source vertex as input, and it will then find the shortest paths to all other vertices. Unlike Dijkstra's algorithm, Bellman-Ford also supports negative edge weights in the graph when calculating the shortest path. It belongs to the dynamic programming category of algorithms.
Imagine a scenario where you need to get to a baseball game from your house. Along the way, on each
road, one of two things can happen. First, sometimes the road you're using is a toll road, and you have
to pay a certain amount of money. Second, sometimes someone you know lives on that street (like a
family member or a friend). Those people can give you money to help you restock your wallet. You
need to get across town, and you want to arrive across town with as much money as possible so you
can buy hot dogs. Given that you know which roads are toll roads and which roads have people who
can give you money, you can use Bellman-Ford to help plan the optimal route.


Instead of your home, a baseball game, and streets that either take money away from you or give
money to you, Bellman-Ford looks at a weighted graph. The graph is a collection of edges that connect
different vertices in the graph, just like roads. The edges have a cost to them. Either it is a positive cost
(like a toll) or a negative cost (like a friend who will give you money). So, in the above graphic, a red
arrow means you have to pay money to use that road, and a green arrow means you get paid money to
use that road. In the graph, the source vertex is your home, and the target vertex is the baseball stadium.
On your way there, you want to maximize the number and absolute value of the negatively weighted
edges you take. Conversely, you want to minimize the number and value of the positively weighted
edges you take. Bellman-Ford does just this.

The algorithm initialises the distance to the source to 0 and the distance to all other nodes to infinity. Then, for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value. After the i-th iteration over the edges, the algorithm has found all the shortest paths of at most i edges. Since the longest possible path without a cycle has V-1 edges, the edges must be scanned V-1 times to ensure the shortest path has been found for all nodes. A final scan of all the edges is performed, and if any distance is updated then a path of |V| edges has been found, which can only occur if at least one negative cycle exists in the graph.

Working Mechanism of the Bellman-Ford Algorithm


Like other dynamic programming problems, the algorithm calculates the shortest paths in a bottom-up manner. It first calculates the shortest distances which have at most one edge in the path. Then it calculates the shortest paths with at most 2 edges, and so on. After the i-th iteration of the outer loop, the shortest paths with at most i edges are calculated. There can be at most |V| - 1 edges in any simple path, which is why the outer loop runs |V| - 1 times. The idea is that, assuming there is no negative weight cycle, if we have calculated the shortest paths with at most i edges, then an iteration over all edges guarantees to give the shortest paths with at most (i+1) edges.

Steps to calculate shortest path using Bellman-Ford


The algorithm used to calculate the path in Bellman-Ford is given below:

Input: Graph and a source vertex src

Output: Shortest distance to all vertices from src. If there is a negative weight cycle, then shortest
distances are not calculated, negative weight cycle is reported.


 Step 1: This step initialises the distances from the source to all vertices as infinite and the distance to the source itself as 0. Create an array dist[] of size |V| with all values infinite except dist[src], where src is the source vertex.
 Step 2: This step calculates the shortest distances. Do the following |V|-1 times, where |V| is the number of vertices in the given graph:
1. For each edge u-v, if dist[v] > dist[u] + weight of edge uv, then update dist[v]:
dist[v] = dist[u] + weight of edge uv
 Step 3: This step reports whether there is a negative weight cycle in the graph. For each edge u-v, if dist[v] > dist[u] + weight of edge uv, then report "Graph contains negative weight cycle".

The idea of Step 3 is that Step 2 guarantees the shortest distances if the graph doesn't contain a negative weight cycle. If we iterate through all edges one more time and get a shorter path for any vertex, then there is a negative weight cycle.

Let’s take an example to understand the concept more clearly:

Let us understand the algorithm with the following example graph. The images are taken from this source.

Let the given source vertex be 0. Initialise all distances as infinite, except the distance to the source itself. The total number of vertices in the graph is 5, so all edges must be processed 4 times.

Let all edges be processed in the following order: (B,E), (D,B), (B,D), (A,B), (A,C), (D,C), (B,C), (E,D). We get the following distances when all edges are processed the first time. The first row shows the initial


distances. The second row shows the distances when edges (B,E), (D,B), (B,D) and (A,B) are processed. The third row shows the distances when (A,C) is processed. The fourth row shows when (D,C), (B,C) and (E,D) are processed.

The first iteration guarantees to give all shortest paths which are at most 1 edge long. We get the following distances when all edges are processed a second time (the last row shows the final values).

The second iteration guarantees to give all shortest paths which are at most 2 edges long. The algorithm processes all edges 2 more times; the distances are already minimised after the second iteration, so the third and fourth iterations don't update the distances.


Bellman-Ford makes relaxations for every iteration, and there are |V|-1 iterations. Therefore, the worst-
case scenario is that Bellman-Ford runs in O(|V|.|E|) time.

However, in some scenarios, the number of iterations can be much lower. For certain graphs, only one
iteration is needed, and hence in the best case scenario, only O(|E|) time is needed. An example of a
graph that would only need one round of relaxation is a graph where each vertex only connects to the
next one in a linear fashion, like the graphic below:

Fig: Big O Notation

Applications of the Bellman-Ford Algorithm


“A version of Bellman-Ford is used in the distance-vector routing protocol. This protocol decides how
to route packets of data on a network. The distance equation (to decide weights in the network) is the
number of routers a certain path must go through to reach its destination.

For the Internet specifically, there are many protocols that use Bellman-Ford. One example is the
routing Information protocol. This is one of the oldest Internet protocols, and it prevents loops by
limiting the number of hops a packet can make on its way to the destination. A second example is the
interior gateway routing protocol. This proprietary protocol is used to help machines exchange routing
data within a system” (Marshal, 2015).


Implementation of Bellman-Ford in Java
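A minimal sketch of such an implementation is given below. It assumes an edge-list representation; the Edge class, the vertex labels and the sample weights are illustrative assumptions rather than values taken from the example figures.

```java
import java.util.Arrays;

public class BellmanFordDemo {

    // One directed, weighted edge u -> v.
    static class Edge {
        final int u, v, weight;
        Edge(int u, int v, int weight) { this.u = u; this.v = v; this.weight = weight; }
    }

    // Returns shortest distances from src, or null if a negative-weight cycle is reachable.
    static int[] bellmanFord(int vertexCount, Edge[] edges, int src) {
        int[] dist = new int[vertexCount];
        Arrays.fill(dist, Integer.MAX_VALUE);    // Step 1: all distances infinite except the source
        dist[src] = 0;

        // Step 2: relax every edge |V| - 1 times
        for (int i = 1; i < vertexCount; i++) {
            for (Edge e : edges) {
                if (dist[e.u] != Integer.MAX_VALUE && dist[e.u] + e.weight < dist[e.v]) {
                    dist[e.v] = dist[e.u] + e.weight;
                }
            }
        }

        // Step 3: one more scan; any further improvement means a negative-weight cycle
        for (Edge e : edges) {
            if (dist[e.u] != Integer.MAX_VALUE && dist[e.u] + e.weight < dist[e.v]) {
                return null;                     // "Graph contains negative weight cycle"
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // 0=A, 1=B, 2=C, 3=D, 4=E -- a small graph with negative edges, for illustration only
        Edge[] edges = {
                new Edge(0, 1, 6), new Edge(0, 2, 7), new Edge(1, 3, 5),
                new Edge(2, 3, -4), new Edge(3, 4, 2), new Edge(1, 4, -2)
        };
        System.out.println(Arrays.toString(bellmanFord(5, edges, 0)));  // [0, 6, 7, 3, 4]
    }
}
```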


Comparison of Dijkstra's Algorithm with the Bellman-Ford Algorithm


Dijkstra's algorithm and the Bellman-Ford algorithm both work to find the shortest path from a source to a destination point, although Dijkstra's follows a greedy approach while Bellman-Ford follows dynamic programming. Dijkstra's algorithm only works with positive edge weights, meaning weights with negative values cannot be handled by it, whereas the Bellman-Ford algorithm is more flexible because it supports both positive and negative edge weights when finding the path. Secondly, Dijkstra relaxes edges only from the current best vertex, whereas Bellman-Ford performs the check over all the edges repeatedly: while calculating the path between vertices, the Bellman-Ford algorithm examines all the edges multiple times, but in Dijkstra's algorithm a visited node won't be checked twice. The vertices in Dijkstra's algorithm effectively work with information about the whole network, whereas in the Bellman-Ford algorithm each node holds only the information related to itself: which neighbouring nodes it can connect to and where the relation came from. Dijkstra's algorithm is faster than Bellman-Ford's, but the second algorithm can be more useful for some problems, such as negative path weights. The Bellman-Ford algorithm, sometimes referred to as the Label Correcting Algorithm, computes single-source shortest paths in a weighted digraph (where some of the edge weights may be negative). Bellman-Ford is in its basic structure very similar to Dijkstra's algorithm, but instead of greedily selecting the minimum-weight node not yet processed to relax, it simply relaxes all the edges, and does this |V| - 1 times, where |V| is the number of vertices in the graph. The repetitions allow minimum distances to propagate accurately throughout the graph, since, in the absence of negative cycles, the shortest path can only visit each node at most once. Unlike the greedy approach, which depends on certain structural assumptions derived from positive weights, this straightforward approach extends to the general case. The functionality of Dijkstra's original algorithm can be extended with a variety of modifications; for example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated, a single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest path calculated; the secondary solutions are then ranked and presented after the first optimal solution. Unlike Dijkstra's algorithm, the Bellman-Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. (The presence of such cycles means there is no shortest path, since the total weight becomes lower each time the cycle is traversed.) However, the Bellman-Ford algorithm has another drawback.


The distance-vector protocols based on Bellman-Ford do not prevent routing loops from happening and suffer from the count-to-infinity problem. The core of the count-to-infinity problem is that if A tells B that it has a path somewhere, there is no way for B to know whether it is itself on that path. To see the problem clearly, imagine a subnet connected like A-B-C-D-E-F, and let the metric between the routers be the number of hops. Now suppose that A goes down. In the vector-update process, B notices that its once very short route of 1 to A is down, because B does not receive the vector update from A. The problem is that B also gets an update from C, and C is still not aware of the fact that A is down, so it tells B that A is only two hops away from it, which is false. This slowly propagates through the network until it reaches infinity (at which point the algorithm corrects itself, due to the relaxation property of Bellman-Ford). Moreover, Dijkstra's algorithm is faster and more efficient than the Bellman-Ford algorithm, but each can be used according to the nature of the problem.

Conclusion
As the analysis shows, the Bellman-Ford algorithm solves the problem with a higher running time, while Dijkstra's algorithm solves the same problem with a lower running time but requires the edge weights to be non-negative. Thus Bellman-Ford is usually used only when there are negative edge weights. Both of these algorithms solve the single-source shortest path problem. The primary difference between the two is that Dijkstra's algorithm cannot handle negative edge weights, whereas Bellman-Ford can handle some edges with negative weight; it must be remembered, however, that if there is a negative cycle there is no shortest path. Hence, given the major strengths and weaknesses of these algorithms, they are both used as the problem demands, but Dijkstra's can be regarded as the easier, faster and more efficient algorithm of the two.


Part 2: Write an article based on the following key aspects which will be published in an IT
magazine.

 Using an imperative definition, specify the abstract data type for a software stack.
 Examine the advantages of encapsulation and information hiding when using an ADT.
 Discuss the view that imperative ADTs are a basis for object orientation and, with justification,
state whether you agree.


Abstract data type and Object oriented programming


Problem solving with a computer means processing data. To process data, we need to define the data type and the operations to be performed on the data. The definition of the data type and the definition of the operations to be applied to the data are part of the idea behind an abstract data type: encapsulate the data and the operations on the data, and hide them from the user. The object-oriented paradigm has emerged as a dominant programming technology, and together with ADTs it offers a great deal in terms of code structure.

ADT Specification

Abstract data types are mathematical models of the data objects that make up a data type, together with the functions that operate on those objects. ADTs can also be seen as entities that are definitions of data and operations but carry no implementation details. This means that we know what we are storing, which operations can be performed on the stored data and, depending on the data structure we choose, how the data will be organised, but we have not yet implemented it in practice. The reason the implementation is left out of an ADT is that every programming language has a different way of implementing it. For example, a particular data structure can be implemented in C using the concept of structures, while the same data structure can be implemented in Java using the concept of classes and objects. Different programming languages therefore use different implementation strategies to realise the same abstract data type. So, basically, data structures are special abstract data types, which have a specification describing how the data is to be stored and which operations can be performed on it, while the implementation depends on the programming language we use. The specification of an ADT (its data specification) is everything the client is permitted to know about it; the ADT's implementer has to write code that matches the specification precisely. An ADT specification is classified into two parts:

1. Signatures: The signatures summarise the name of the data type, the names of the operations over that data type, and the argument and result types of each operation. In most programming languages signatures are written down formally; the example file istack.h is such a definition of an ADT's specification.
2. Axioms: Axioms are used to specify the behaviour of the operations. Some programming languages make it easier to formally write down such axioms; even fewer of them ask you to do so and check them at compile or run time.


In our example we have informally written down some axioms, as C comments. Axioms specify:

 Conditions under which it is legal to invoke the operations
 Restrictions on the behaviour of those operations

The main logic behind the implementation of an ADT is to keep the user unaware of the memory management of the data and of the algorithms used on it. They are called abstract because they present an implementation-independent view. The stack is a good example of an ADT, so here we explain it further.

Stack

A stack is a data structure, or container of data objects, in which elements are inserted and removed according to the Last In First Out (LIFO) principle. A stack is a non-primitive linear data structure. It is an ordered list where a new item is added, and an existing element is deleted, from only one end, called the top of the stack (TOS). As all insertion and deletion in a stack is done from the top of the stack, the last element added will be the first to be removed; that is the reason why a stack is called a Last-In-First-Out (LIFO) type of list.

Fig: Representation of Stack

A stack ADT allows all data operations at one end only. At any given time we can only access the top element of a stack: the element which is placed (inserted or added) last is accessed first. In stack terminology, the insertion operation is called the PUSH operation and the removal operation is called the POP operation.


The push operation inserts an element into the container (the stack) and the pop operation removes an element from it. An everyday example of a stack: most of us eat biscuits (or Poppins). If we assume that only one side of the wrapper is torn and the biscuits are taken out one by one, that is what is called popping; similarly, if we want to keep some biscuits for later, we put them back into the pack through the same torn end, which is called pushing. The process uses the LIFO mechanism, so it can be taken as an example of a stack.

Why Stack is an ADT?

As discussed in the previous part, abstract data types are mathematical models that include data together with various operations while keeping the implementation details hidden. There are various examples, like stack, queue, linked list, disjoint sets, binary tree and so on, that can be considered ADTs. They are considered ADTs because their most important feature is to perform operations without exposing the implementation information, which is the process of encapsulation. Among them, consider the stack. The stack follows this same approach of encapsulation: the operations are performed, but the inner workings are not visible, and the definition does not depend directly on any particular programming language. Mainly, an ADT consists of two major parts: the first is the declaration of the data, and the second is the declaration of the operations. As we know, a data type is considered an ADT if it is defined in terms of a declaration of data and the operations performed on it, while hiding the inner information of the implementation. The stack follows both of these principles of abstract data types, so the stack is listed as an ADT.

Additionally, the stack is considered a Last In First Out (LIFO) structure because operations can be performed from only one side of the stack, i.e. the top of the stack. To perform the operations there are some standard stack functions such as PUSH(), POP(), PEEK(), isFull() and isEmpty(). The stack hides the implementation method, whether it is built on an array or a linked list, and it also helps in organising the data for efficient management and retrieval. The user of the stack does not care how push, pop or delete are executed inside the program, and the program does not care what is being pushed, popped or deleted; only the commands/functions are executed while the internal operations are hidden, so the stack can be considered an ADT. Also, an ADT is defined by its operations rather than by any particular values: integers, for example, support operations like addition, multiplication and division, but multiplication cannot be performed on string data types, so an operator by itself cannot be called an ADT. The illustration below should make the concept of an ADT clearer:
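A Java analogue of such a specification might look like the interface below; the name StackADT and the axiom comments are illustrative assumptions, not taken from the istack.h example mentioned earlier.

```java
// An illustrative Java analogue of a stack ADT specification. The method signatures
// describe WHAT a stack can do; the comments state the axioms (the expected behaviour);
// nothing here says HOW the stack is implemented.
public interface StackADT<T> {

    // After push(x), the stack is non-empty and peek() returns x.
    void push(T item);

    // pop() removes and returns the element most recently pushed and not yet popped.
    // It is only legal to call pop() on a non-empty stack.
    T pop();

    // peek() returns the same element pop() would return, but without removing it.
    // It is only legal to call peek() on a non-empty stack.
    T peek();

    // isEmpty() is true exactly when every pushed element has been popped.
    boolean isEmpty();
}
```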


As seen in the illustration above, we can understand an abstract data type and how it operates. The user interacts with the interface using the operations that have been specified by the abstract data type; the abstract data type is the shell that the user interacts with, and the implementation is hidden one level deeper. The user does not know the details of the implementation. The same concept is used in the stack. Let's be clearer with the stack example (a comparable sketch appears below):

In this example, basic stack functions such as push and peek are used, and a linked list is used inside the program. No matter what is processed inside the program, the push and pop operations are executed and return a result, but the user does not know how push or peek works inside the algorithm or program. Push refers to the insertion of elements into the container; here, in the program, the number 45 and the strings 'str' and 'String' are inserted into the container, but we do not know how they are being inserted or pushed. The functions Push() and Peek() are executed, the information is hidden, and the program does not care what we have inserted or pushed. Another example of a stack is the x87 floating point architecture, which is an example of a set of registers organised as a stack where direct access to individual registers (relative to the current top) is also possible. As with stack-based machines in general, having the top-of-stack as an implicit argument allows for a small machine code footprint with good usage of bus bandwidth and code caches, but it also prevents some types of optimisations possible on processors permitting random access to the register file for all (two or three) operands. A stack structure also makes superscalar implementations with register renaming (for speculative execution) somewhat more complex to implement, although it is still feasible, as exemplified by modern x87 implementations. Sun SPARC, AMD Am29000 and Intel i960 are all examples of architectures using register windows within a register-stack as another strategy to avoid the use of slow main memory for function arguments and return values. But the main point is that how they operate, how they function inside the processor, and how the registers and dedicated memory work are not shown or displayed outside.
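A comparable sketch of the kind of program described above is given below; it assumes a small generic stack backed by a singly linked list, and the class name LinkedStack is illustrative.

```java
// A tiny linked-list-backed stack whose users only see push(), pop() and peek();
// how the nodes are linked internally stays hidden from them.
public class LinkedStack<T> {

    private static class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private Node<T> top;                  // clients never see this internal representation

    public void push(T item) {            // insert at the top of the stack
        top = new Node<>(item, top);
    }

    public T pop() {                      // remove and return the top element
        if (top == null) throw new IllegalStateException("stack is empty");
        T value = top.value;
        top = top.next;
        return value;
    }

    public T peek() {                     // look at the top element without removing it
        if (top == null) throw new IllegalStateException("stack is empty");
        return top.value;
    }

    public boolean isEmpty() { return top == null; }

    public static void main(String[] args) {
        LinkedStack<Object> stack = new LinkedStack<>();
        stack.push(45);                   // the values mentioned in the example above
        stack.push("str");
        stack.push("String");
        System.out.println(stack.peek()); // String
        System.out.println(stack.pop());  // String
        System.out.println(stack.peek()); // str
    }
}
```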


The implementation of the stack also requires that we provide a physical view of the data using some collection of programming constructs and primitive data types. As we discussed earlier, the separation of these two perspectives allows us to define complex data models for our problems without giving any indication of how the model will actually be built. This provides an implementation-independent view of the data. Since the stack is used in various programming languages, this implementation independence allows the programmer to switch the details of the implementation without changing the way the user of the data interacts with it, and the user can remain focused on the problem-solving process. Thus the stack can be considered an Abstract Data Type (ADT).

Advantages of encapsulation and information hiding in ADT: Information hiding is the practice of creating the classes or components of an application in such a manner that their internals cannot be used by clients or by any unauthorised party. Encapsulation is one method of information hiding that helps in creating the boundaries of a program, but it cannot be completely equated with information hiding: information hiding is mostly a design principle, whereas encapsulation is a feature of the programming language. Information hiding helps to hide a design decision from the rest of the system. The main benefit of encapsulation in an ADT is that it makes the program less complex for its users: in simple words, the programmer does not have to learn the technical internals of the component; all they need to do is apply it to their problem. With encapsulation the user only knows about the well-defined interface of the application, and the data and functions are wrapped into a single unit.

Abstraction/Information hiding: Abstraction is nothing but abstracting the data. Data abstraction is also a form of information hiding which provides only essential information to the outside world and hides the background details; only the idea is presented rather than the internal functioning. In other words, information hiding is the way to use abstract data types and abstract objects, such as those provided by Ada packages, Simula classes and C++ classes. The user of a data type need not know how that data type is implemented; for example, we have been using the int, float and char data types knowing only the values they can take and the operations that can be performed on them, without any idea of how these types are implemented. So a user only needs to know what a data type can do, but not how it does it. For example, if you are using an ArrayList in the implementation, the clients/users will not normally know about it, although if they try they can still find out the data types being used.


In the concept of OOP, information hiding is the ability to hide an object's details, state and behaviour from users. Here, 'users' refers to objects of other classes with which we do not want to share the information. Initially, the concept of information hiding was developed to reduce the interconnection within a system, which facilitates splitting the system into modules while maintaining user-friendly external interfaces. If information hiding is done well, then changes made to the hidden portion of a module should not affect anything outside the module, which allows software engineers to manage changes and requirements more readily (ND, 2011). The hidden information is often the most sensitive part of a program, where manipulation could result in incorrect outputs and harm the integrity of the data; hiding it also reduces the complexity of the system. For example, suppose you have a data member balance inside the class CheckAccount. The balance of the account is sensitive information, so outside applications may be allowed to check the balance, but they must not be allowed to alter the balance attribute. Declaring the balance attribute with the private access modifier therefore hides the information from outside applications (ND, 2016).

Advantages of Information Hiding:

 It simplifies the concept of object-oriented models.
 It provides flexibility by allowing programmers to modify the functionality of a program during its normal evolution.
 It helps to prevent system-wide design changes by hiding certain portions of the code.
 Hiding information that is unnecessary at a particular level of abstraction within the final software system allows software engineers to better understand, develop and maintain the software.
 It maintains a higher level of abstraction in the software, making the software more comprehensible.
 Information hiding can also be applied to confidentiality, copyright protection, non-repudiation, anti-counterfeiting and data integrity.

Encapsulation: Encapsulation is the process of collecting a bunch of things together and then putting them in a box, or capsule. The implementation detail of an encapsulated component is not exposed, although access control can be maintained through the encapsulation technique. Encapsulation binds all the data members and member functions into a single unit, which helps in controlling corruption of the data.
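A minimal Java sketch of the CheckAccount example above, assuming a deposit operation as the controlled way of changing the hidden balance (the method names are illustrative):

```java
// The balance field is private (information hiding), and the class exposes only
// the operations outsiders may use (encapsulation).
public class CheckAccount {

    private double balance;                       // hidden state: not reachable from other classes

    public CheckAccount(double openingBalance) {
        this.balance = openingBalance;
    }

    public double getBalance() {                  // outside code may check the balance...
        return balance;
    }

    public void deposit(double amount) {          // ...but may change it only through controlled operations
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    public static void main(String[] args) {
        CheckAccount account = new CheckAccount(100.0);
        account.deposit(50.0);
        System.out.println(account.getBalance()); // 150.0
        // account.balance = 1_000_000;           // would not compile: balance is private
    }
}
```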


Encapsulation: Encapsulation is the process of collecting a group of things together and putting them into a box, or capsule. The implementation details of an encapsulated component are not exposed, yet access control can still be maintained. Encapsulation binds the data members and member functions into a single unit, which helps in preventing corruption of the data. Encapsulation does not deal with the internal structure of the program; it simply combines the parts of the program into a single unit. With encapsulation, the mechanism inside a component can be improved, changed or replaced without any impact on other components that use the same interface. Encapsulation is mostly achieved through information hiding, which is the process of hiding all the secrets of an object that do not contribute to its essential characteristics. The concept of encapsulation was introduced to control access to the underlying data in order to reduce system complexity and protect the data from modification by clients (Beal, 2018). Encapsulation is maintained through the access modifiers private, public and protected: private members are accessible only to the class itself, while public members are accessible both inside and outside the class. Encapsulation lets the end users of a system learn what to do with the system instead of how it does it. For example, if a driver needs to change gear, he simply moves the lever that operates the gears and the car changes gear; the driver does not need to understand all the complexity and mechanism inside. This is how encapsulation reduces the complexity of a system and makes it easier for end users.

Benefits of encapsulation: Encapsulation is very important in object-to-object relationships. Each object in a program has two views, an internal view and an external view, and because encapsulation also protects the application it is sometimes described as an information hiding method. The internal implementation of the program is not exposed; the user is familiar only with the interface, not with the implementation. Data integrity is maintained by hiding the data and providing methods to gain access to it. The benefit is that there are fewer interdependencies between software components, which reduces complexity and increases the robustness of the system.
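The gear-changing example can be sketched as the hedged Java fragment below; the names Car, changeGear, engageClutch, shiftGearbox and releaseClutch are assumptions made only for this illustration:

    // The driver only calls changeGear(); the internal steps stay hidden.
    public class Car {
        private int currentGear = 1;

        public void changeGear(int gear) {       // the only operation exposed to the "driver"
            engageClutch();                      // internal mechanism, hidden from callers
            shiftGearbox(gear);
            releaseClutch();
        }

        private void engageClutch()         { /* internal detail */ }
        private void shiftGearbox(int gear) { currentGear = gear; }
        private void releaseClutch()        { /* internal detail */ }
    }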

Difference between information hiding and encapsulation: Information hiding and encapsulation are both important OOP concepts that help in maintaining the security of programs. Encapsulation wraps the data members and member functions together, which creates a boundary: those members are used within the class, and a method name only describes what action can be performed on objects of that class. Information hiding, on the other hand, protects class members from illegal or unauthorized access. The main difference is that encapsulation helps in managing or hiding the complexity of the system, while information hiding is concerned with the security of the information or data. Complex details are what encapsulation hides, whereas information hiding focuses on restricting or permitting the use of the data inside the capsule. For information hiding the access modifier must always be private, but for encapsulation the access modifier may be public or private according to the needs of the program. Data hiding is a process and technique for maintaining the security of the program, while encapsulation can be seen as a sub-process of data hiding that hides the complexity of the program. In Java, encapsulation is achieved with the help of the private, public and protected access modifiers. Since encapsulation is the concept of binding the data members and member functions into a single unit, it binds together everything that belongs together. Private member functions are not accessible outside the class in which they are declared, which is the essence of encapsulation; protected members are accessible inside the class in which they are defined and in the classes that extend it; public members are accessible anywhere. Encapsulation is achieved by making the members of a class private, or hidden from other classes, so that they can be accessed only through the methods of the class.

Imperative ADTs are a basis for object orientation: Imperative types are those that are applied in practice and can carry different meanings at different times. For example, an object can be populated with different things at different times. A class is an abstract type and does not hold concrete meaning until it is populated as an object, so a class is a good example of an abstract data type which, until instantiated, has only an abstract meaning. An abstract data type is a mathematical model of a data structure: it acts as a container holding a finite number of objects, where the objects may be associated through a given binary relationship. Abstract data types support the object-oriented programming paradigm, in which objects are stored and operations are performed on those objects. An ADT specifies the logical properties of a data type: it describes what data can be stored (the characteristics of the ADT) and how it can be used (the operations), but not the implementation details of the objects. An abstract data type mainly supports the encapsulation and data hiding features. ADTs are specified in terms of what operations are to be performed, not how the operations will be performed; an ADT does not deal with algorithms or with the memory consumption of the data, only with the concepts used to solve the problem.
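To make the "what, not how" idea concrete, the hedged sketch below specifies a stack ADT as a Java interface and gives one possible class that implements it; the names StackADT and ArrayStack are assumptions made for this illustration only:

    // The interface states WHAT operations the stack ADT offers.
    interface StackADT<T> {
        void push(T item);
        T pop();
        boolean isEmpty();
    }

    // A class supplies HOW those operations work (one possible implementation).
    class ArrayStack<T> implements StackADT<T> {
        private final java.util.ArrayList<T> items = new java.util.ArrayList<>();

        public void push(T item) { items.add(item); }

        public T pop() {
            if (isEmpty()) throw new java.util.NoSuchElementException("stack is empty");
            return items.remove(items.size() - 1);
        }

        public boolean isEmpty() { return items.isEmpty(); }
    }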

At the same time, OOP is a programming model that organizes software around objects rather than actions and logic. Objects are real-world entities that define the data they contain and the logic by which that data can be manipulated. OOP does not only define the data type of a data structure but also the types of operations that can be applied to it, and the data structures become objects that include both the data and the functions needed to operate on it. Relationships can be built between one object and many objects. OOP and ADTs can therefore be viewed as complementary topics, with objects as the thing they have in common. Additionally, imperative means something that can be implemented differently at different times and hence can have a different meaning at different times. A class has different meanings at different times because it is abstract; only when an object is built does it take a solid value, and different objects with different values can be built from the same class. Even inheritance can make classes completely different, independently of the objects being created. So an imperative ADT is, in effect, a class in object-oriented programming, which is why classes and objects are the basic building blocks of OOP and why ADTs are a basis for OOP.

Object-Oriented Programming Languages

Object-oriented programming is a programming paradigm (model) that uses "objects" to design applications and computer programs. It utilizes several techniques from previously established paradigms, including modularity, polymorphism, inheritance and encapsulation. Because of these strong features it is nowadays commonly used in mainstream software development; Java, C++, JavaScript and PHP are some examples of object-oriented programming languages.

The fundamental idea behind object-oriented programming is to combine data and functions into a single unit; such a unit is called an object. An object's functions are called member functions, and the data is only accessible through the member functions. An OOP program typically consists of a number of objects which communicate with each other by calling one another's member functions. Well-paid jobs are offered around the world for experts in software development using object-oriented programming languages.

Advantages of Object-Oriented Programming Languages

1. Improved software-development productivity: Object-oriented programming is modular, as it provides separation of duties in object-based program development. It is also extensible, as objects can be extended to include new attributes and behaviors, and objects can be reused within and across applications. Because of these three factors, modularity, extensibility and reusability, object-oriented programming provides improved software-development productivity over traditional procedure-based programming techniques.
2. Improved software maintainability: For the reasons mentioned above, object-oriented software is also easier to maintain. Since the design is modular, part of the system can be updated when issues arise without the need for large-scale changes.
3. Faster development: Reuse enables faster development. Object-oriented programming languages come with rich libraries of objects, and code developed during projects is also reusable in future projects.
4. Lower cost of development: The reuse of software also lowers the cost of development. Typically, more effort is put into object-oriented analysis and design, which lowers the overall cost of development.
5. Higher-quality software: Faster development and lower cost allow more time and resources to be used in verifying the software. Although quality depends on the experience of the team, object-oriented programming tends to result in higher-quality software.

Limitations/Disadvantages of OOP:
1. Steep learning curve: The thought process involved in object-oriented programming may not be natural for some people, and it can take time to get used to it. It is complex to create programs based on the interaction of objects, and some of the key programming techniques, such as inheritance and polymorphism, can be challenging to comprehend initially.
2. Larger program size: Object-oriented programs typically involve more lines of code than procedural programs.
3. Slower programs: Object-oriented programs are typically slower than procedure-based programs, as they typically require more instructions to be executed.
4. Not suitable for all types of problems: Some problems lend themselves well to a functional, logic or procedure-based programming style, and applying object-oriented programming in those situations will not result in efficient programs.

Some of the common features of OOP are:
• Encapsulation
• Inheritance
• Polymorphism
• Abstraction

Data Encapsulation

Encapsulation is the process that allows selective hiding of data and functions in a class. All communication with an object is done via messages. The object to which a message is sent is called the receiver of the message, and the messages define the interface to an object.
Providing access to an object only through its messages, while keeping the details private, is called information hiding. It ensures that only authorized functions access the relevant data, guarding against unauthorized access and ensuring data safety.

Encapsulation differs from abstraction in that it hides the data for the purpose of security by binding it into a single unit. Encapsulation is like a black box for users and can be compared to a shield that prevents the data from being accessed by code outside the shield. The hidden members can only be accessed through the member functions of the class in which they are declared. Because the data is hidden, encapsulation is also called data hiding. The advantages of encapsulation include data hiding, increased flexibility of the code, reusability of the code and easier testing. An encapsulated field cannot be modified directly, which means the data can be made read-only or write-only; this is one of the benefits of encapsulation. A class that is encapsulated also has total control over what is stored in its fields.

Inheritance

Inheritance is the property that allows the reuse of an existing class to build a new class. The principle behind this sort of division is that each subclass shares common properties with the class from which it is derived. Taking vehicles as an example, all the vehicles in a class may share the properties of having wheels and a motor. Objects and classes extend the concept of abstract data types by adding the notion of inheritance: such classes inherit their behavior from a parent or base class.

Types of Inheritance

1. Single Inheritance: In single inheritance, one class inherits the properties of another. It enables a derived class to inherit the properties and behavior from a single parent class, which in turn enables code reusability and allows new features to be added to existing code.
Here, Class A is the parent class and Class B is the child class, which inherits the properties and behavior of the parent class.

2. Multiple Inheritance: Under this type, the derived class has several base classes. If a child class is built from two or more parent classes, the inheritance is called multiple inheritance.

3. Multilevel Inheritance: When a class is derived from a class which is itself derived from another class, that is, a class has more than one parent class but at different levels, the inheritance is called multilevel inheritance. In the flowchart, class B inherits the properties and behavior of class A and class C inherits the properties of class B. Here A is the parent class of B and class B is the parent class of C, so class C implicitly inherits the properties and methods of class A along with those of class B.

4. Hierarchical Inheritance: When a class has more than one child class (subclass), in other words when more than one child class has the same parent class, the inheritance is known as hierarchical. In the flowchart, Class B and Class C are the child classes inheriting from the parent class, Class A.

Fig: An example of Java Inheritance
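The figure above is a screenshot that is not reproduced here. The hedged sketch below illustrates single, multilevel and hierarchical inheritance in Java using class names (Vehicle, Car, SportsCar, Bike) assumed only for this example:

    // Single inheritance: Car extends Vehicle.
    class Vehicle {
        void start() { System.out.println("Vehicle started"); }
    }

    class Car extends Vehicle {              // Car inherits start() from Vehicle
        void openBoot() { System.out.println("Boot opened"); }
    }

    // Multilevel inheritance: SportsCar -> Car -> Vehicle.
    class SportsCar extends Car {
        void launchControl() { System.out.println("Launch control engaged"); }
    }

    // Hierarchical inheritance: Bike and Car share the same parent, Vehicle.
    // Note: Java does not allow multiple inheritance of classes; it is approximated with interfaces.
    class Bike extends Vehicle {
        void wheelie() { System.out.println("Wheelie!"); }
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            new SportsCar().start();   // inherited from Vehicle through Car
            new Bike().start();        // inherited directly from Vehicle
        }
    }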

Polymorphism

Polymorphism enables the same function to behave differently on different classes. Object-oriented languages try to make existing code easily modifiable without actually changing the code. Polymorphism means that the same function may behave differently on different classes, and it is an important feature of object-oriented programming. Polymorphism can be both static and dynamic: method overloading is static polymorphism, whereas method overriding is dynamic polymorphism. Overloading, in simple words, means more than one method having the same name but behaving differently based on the arguments passed when calling the method, while overriding means a derived class re-implementing a method of its superclass.

Runtime Polymorphism in Java

In Java, runtime polymorphism refers to a process in which a call to an overridden method is resolved at runtime rather than at compile time. A reference variable of the superclass is used to call an overridden method at run time. Method overriding is an example of runtime polymorphism. Let us look at the following code to understand how method overriding works:

Fig: Run time polymorphism

Compile Time Polymorphism: In Java, compile time polymorphism refers to a process in which a call to an overloaded method is resolved at compile time rather than at run time. Method overloading is an example of compile time polymorphism. Method overloading is a feature that allows a class to have two or more methods with the same name but with different arguments. Unlike method overriding, the arguments can differ in:

• the number of parameters passed to a method,
• the data types of the parameters,
• the sequence of data types when passed to a method.

Fig: compile time polymorphism
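The two figures above are screenshots of code that are not reproduced here. The hedged sketches below show what a typical overriding and overloading example looks like, with assumed class names (Animal, Dog, Calculator):

    // Runtime polymorphism (method overriding): the call is resolved at run time
    // based on the actual object held by the superclass reference.
    class Animal {
        void sound() { System.out.println("Animal makes a sound"); }
    }

    class Dog extends Animal {
        @Override
        void sound() { System.out.println("Dog barks"); }
    }

    public class OverridingDemo {
        public static void main(String[] args) {
            Animal a = new Dog();   // superclass reference, subclass object
            a.sound();              // prints "Dog barks"
        }
    }

Method overloading, by contrast, is resolved at compile time from the argument list:

    // Compile time polymorphism (method overloading): same name, different parameters.
    class Calculator {
        int add(int a, int b)          { return a + b; }
        int add(int a, int b, int c)   { return a + b + c; }   // different parameter count
        double add(double a, double b) { return a + b; }       // different parameter types
    }

    public class OverloadingDemo {
        public static void main(String[] args) {
            Calculator c = new Calculator();
            System.out.println(c.add(2, 3));        // 5
            System.out.println(c.add(2, 3, 4));     // 9
            System.out.println(c.add(2.5, 3.5));    // 6.0
        }
    }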

Abstraction

Data abstraction is the process of identifying the properties and methods of a particular entity that are relevant to the application. It is the process of examining all the available information about an entity to identify the information that is relevant; in other words, it is the act of representing the essential features without including the background details or explanations. An abstract data type is a programmer-defined data type that can be manipulated like the system-defined data types. For example, on a switchboard you only press certain switches according to your requirement; what is happening inside, and how it is happening, you do not know. This is abstraction.

In Java the abstraction property is accomplished with the help of abstract classes and methods. An abstract class is a class declared with the keyword abstract; it can contain abstract methods as well as concrete methods. An abstract method is a method that has no body, and it must be declared in an abstract class. Abstraction helps in selecting, from a larger pool of data, only the details that are relevant to the objects; selecting data from a larger set automatically means cutting off, or hiding, the data that is not relevant.

    class Customer
    {
        int account_no;
        float balance_Amt;
        String name;
        int age;
        String address;

        void balance_inquiry()
        {
            /* To perform a balance inquiry only the account number
               is required, so the remaining properties are hidden
               from the balance inquiry method. */
        }

        void fund_Transfer()
        {
            /* To transfer funds the account number and balance are
               required, and the remaining properties are hidden. */
        }
    }

Fig: An example to show data abstraction in java

Abstraction is used to display only the important things that are necessary at the interface. Its advantages are that it reduces the complexity of the design and implementation process of the software; another benefit of using an abstract class is that it allows several related classes to be grouped as siblings. Abstraction helps in defining an object by its properties, functionality and interfaces.
OOP and ADT arguments

Both ADT and OOP can be taken as ways of understanding the implementation of a data abstraction. Abstract data types are often called user-defined data types because programmers are allowed to define new types on top of the primitive data types. The concept of OOP rests on the use of objects, which can be seen as collections of methods or procedures that share access to private local state. Objects refer to real-world entities as well as to well-known mathematical operations. ADT and OOP are different mechanisms, or paradigms, for achieving the goals of a program, and they are distinguished by their mechanisms, their techniques and their use of abstract data. The way the abstraction barrier between clients and data is achieved differs between ADTs and OOP: the primary mechanism of an ADT is type abstraction, while in OOP it is abstraction through objects. In an ADT the data is abstracted by the type system, that is, the client uses the data type to declare variables but cannot inspect the representation directly, whereas in OOP the data is abstracted because it can be accessed only through an interface. In OOP the user is aware of the data types, but in an ADT the representation is unknown.

The main difference between ADT and OOP lies in the technique used to enforce encapsulation and abstraction. In OOP, objects are treated as real-world objects, so each acts like an interface and objects are encapsulated from one another, whereas in an ADT all abstract values are enclosed within a single abstraction, so they are not encapsulated from each other. An ADT allows instances to be created with well-defined properties and behaviors, while abstraction allows instances of entities to be collected into groups in which only their common attributes need to be considered. The concept of OOP is based on declaring instances of an ADT, called objects. In OOP an ADT is realized as a class, and the class is used to define the properties of its objects. ADTs are less extensible and flexible because they carry many security constraints and the classes built on an ADT are highly interdependent; despite this, ADTs support verification and optimization of programs. The OOP implementation of abstract data is just the opposite of the ADT approach.

An abstract data type is one of the most fundamental and generalized concepts in programming. It is composed of a type of data and the operations associated with it. Object orientation is a programming paradigm in which programs are organized around data and objects rather than functions and logic. Implementing an abstract data type includes choosing a particular data structure; an ADT itself is an interface, just a collection of methods and their type signatures, possibly with pre- and post-conditions.

The main idea of an abstract data type is abstraction. In object-oriented programming we implement abstract data types as classes: a class uses the concept of data abstraction known as an abstract data type.
A class can implement one or more ADTs by giving actual implementations for the methods specified in the ADT. An ADT guides and focuses on solving the problem and does not care how the information type is implemented in the program; it utilizes particular sorts of data structures for its implementation. Using an ADT is simply the idea of using data structures, while OOP is a kind of programming idea that depends on ADTs. So we can say that the ADT is a basis for object-oriented programming.

Conclusion: A stack implementation is important when operations are required at a single end only, that is, for LIFO behaviour, and it can be used for things such as undo operations in programs. It is well suited to temporary data, and compared with heap-based memory consumption it has the advantage that once the pop operation is performed the data is automatically removed. The stack is an ADT because its operations are specified without requiring any particular implementation, and it can be described purely in terms of what happens to the data, which is why such types are called abstract data types. The process is also easy to understand: push adds an item to the stack, while pop removes an item from the stack, and these operations are allowed only at one end of the stack, known as the top. Information hiding and encapsulation are both important concepts of object-oriented programming: information hiding focuses on the security of the information, whereas encapsulation helps in managing the complexity of the program and makes the application more user friendly.

Encapsulation keeps data members and their methods within the class and focuses more on hiding the complexity of the system. Its purpose is to facilitate the use of the software when implemented, not to complicate it. Hiding and protecting data from the outside world, on the other hand, guards the program's data against corruption. Encapsulation is the goal, while concealing data and information is the way to achieve it; the capsule is similar to a container in which all the members and data routines are kept. Encapsulation is possible without concealing the data, while concealing the data is not possible without encapsulation. The benefit of using an OOP language is that it offers features such as reusability, refactoring, extensibility, maintainability and efficiency of code. Along with that, OOP has features such as polymorphism, encapsulation, abstraction and inheritance that help in managing the code and making the program more flexible. An ADT simply tells what to do to solve the problem and does not care how the data type is implemented in the program. It uses certain types of data structures for its implementation.
The use of the ADT is just the concept of using data structures, whereas OOP is a type of programming concept that is based on ADTs. So we can say that OOP uses the concept of ADT.
Part 3: Prepare a formal written report that includes the following:


Demonstrate the implementation of a complex ADT and algorithm in an executable programming
language to solve a well-defined problem and also include error handling and report test results.
Here, explain different approaches to error handling and exception handling. Provide a program implementing try, catch and finally blocks, with the different results obtained after executing that program.
Demonstrate how the implementation of an ADT/algorithm solves a well-defined problem and
critically evaluate its complexity.
Discuss how asymptotic analysis can be used to assess the effectiveness of an algorithm. Here you
need to explain what asymptotic analysis of an algorithm is and how it can be used to find
effectiveness of an algorithm and determine two ways in which the efficiency of an algorithm can
be measured, illustrating your answer with an example.
Interpret what a trade-off is when specifying an ADT using an example to support your answer.
Write concluding remarks that evaluates the benefits of using implementation independent data
structure.

ANALYSIS OF
COMPLEX ADT AND
APPROACH FOR ERROR
HANDLING

Formal Report

BY: SANTOSH ACHARYA

ABSTRACT
The definition of ADT only mentions what operations are to be performed but not how these operations
will be implemented. It does not specify how data will be organized in memory and what algorithms
will be used for implementing the operations. It is called “abstract” because it gives an implementation-
independent view. The process of providing only the essentials and hiding the details is known as
abstraction. This report describes the implementation of a complex ADT and an algorithm in an
executable programming language to solve a well-defined problem and explains various error handling
methods, including exception handling. The report also critically assesses the complexity of the algorithm and analyzes the use of asymptotic analysis in evaluating its effectiveness. The investigation further draws attention to the trade-offs involved when specifying an ADT, illustrated with an example. Finally, the report evaluates the benefits of using implementation-independent data structures.

1. INTRODUCTION
The data structures that we use in applications often contain a great deal of information of various
types, and certain pieces of information may belong to multiple independent data structures. For
example, a file of personnel data may contain records with names, addresses, and various other pieces
of information about employees; and each record may need to belong to one data structure for
searching for particular employees, to another data structure for answering statistical queries, and so
forth. Despite this diversity and complexity, a large class of computing applications involve generic
manipulation of data objects, and need access to the information associated with them for a limited
number of specific reasons. Many of the manipulations that are required are a natural outgrowth of
basic computational procedures, so they are needed in broad variety of applications.

ADT is a specification of the components that contain the data and all the necessary operations that
they must have. They are used in the design and analysis of algorithms, data structures and software
systems. Basically, there are three common ADTs: the Stack ADT, the Queue ADT and the List ADT. In a computer, all data is stored in the form of variables; when a single structure groups many variables that together specify an object, it can be regarded as a complex ADT, because the type of ADT it uses cannot be determined from any single variable. To handle a large amount of data automatically, the concept of an algorithm is applied to simplify the organization of the data. Now let us understand the implementation of an abstract data type with the help of a binary search tree to solve a well-defined problem.

Implementation of Complex ADT using binary search algorithm


Here, we have selected a searching algorithm as part of the complex ADT implementation in the Java programming language. There are various sorting as well as searching algorithms available to use, but they differ in performance, efficiency and so on.

2.1 Tree
A tree structure is an algorithm for placing and locating files (called records or keys) in a database.
The algorithm finds data by repeatedly making choices at decision points called nodes. A node can
have as few as two branches (also called children), or as many as several dozen. The structure is
straightforward, but in terms of the number of nodes and children, a tree can be gigantic. Tree
represents the nodes connected by edges. We will discuss binary tree or binary search tree
specifically.

Binary Tree is a special data structure used for data storage purposes. A binary tree has a special
condition that each node can have a maximum of two children. A binary tree has the benefits of both
an ordered array and a linked list as search is as quick as in a sorted array and insertion or deletion
operation are as fast as in linked list.

Fig. Representation of Binary Search Tree

Binary trees are used when all the data is in random-access memory (RAM). The search algorithm is
simple, but it does not minimize the number of database accesses required to reach a desired record.
When the entire tree is contained in RAM, which is a fast-read, fast-write medium, the number of
required accesses is of little concern. But when some or all of the data is on disk, which is slow-read,
slow-write, it is advantageous to minimize the number of accesses (the tree depth). Alternative
algorithms such as the B-tree accomplish this.

Binary Search Algorithm


Binary search is the most popular search algorithm among the various search algorithms. It is efficient and also one of the most commonly used techniques for solving problems. If all the names in the world were written down together in order and you wanted to search for the position of a specific name, binary search would accomplish this in a maximum of about 35 iterations. Binary search works only on a sorted set of elements; to use binary search on a collection, the collection must first be sorted. When binary search is used on a sorted set, the number of iterations is always reduced on the basis of the value being searched for. The algorithm works on sorted arrays or lists even for large amounts of data, and it is also known as half-interval search. It is called so because it compares the target value to the middle element of the array; if they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare with the target value. This is repeated until the value is found in the array or in the list, or no elements remain.
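As a rough check on the claim of 35 iterations (treating the number of names in the world as being on the order of 8 billion, which is an assumption made only for this estimate): each comparison halves the remaining range, so at most about ceil(log2(8,000,000,000)) = 33 comparisons are needed, which is consistent with the stated maximum of 35.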

Advantage and Disadvantages of Binary Search Tree Algorithm


The primary advantage of a binary search algorithm is that searching a sequence can be achieved in
logarithmic time. The primary disadvantages are that the data must be in sorted order. Arrays are the
ideal container for binary searches as they provide constant-time random-access and are therefore
trivial to both sort and search efficiently. Sorted binary trees can also be used, however for optimal
performance they must be totally balanced (e.g., red/black binary tree). Constant-time random-access
is not a requirement of binary trees, however the cost of maintaining balance during construction of
the tree has to be taken into account.

With a linear search, we start at one end of the sequence and traverse through the sequence one element
at a time until we find the value we're looking for, or we reach the element one-past-the-end of the
sequence (in which case the element we're looking for does not exist). For a sequence of n elements,
the worst case is O(n). Linear search is ideal for forward lists (singly-linked lists) and lists (doubly-
linked lists) as neither provides nor requires constant-time random-access.

With binary search, we locate the middle element in the sequence. If that's not the value we are looking
for, we can easily determine which half of the sequence contains our value because the elements are
in sorted order. So we eliminate the other half and repeat the algorithm with the remaining half. As
such, each failure to find the value reduces the number of elements to be searched by half (plus the
middle element). If there are no elements in the remaining half then the value does not exist. The worst

case is therefore O(log n). However, programming in binary search algorithm is very difficult and error
prone.

Binary Search Tree Implementation


A binary tree consists of nodes; the nodes are simply objects of a class, and each node holds data and a link to its left node and its right node.

Usually we call the starting node of the tree the root. Here is a sample class for the tree node:
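The original class appears as a screenshot; a minimal sketch of what such a node class typically looks like is given below (the field names data, left and right are assumptions):

    // A single node of the binary search tree.
    class Node {
        int data;      // the value stored in this node
        Node left;     // link to the left child (smaller values)
        Node right;    // link to the right child (larger values)

        Node(int data) {
            this.data = data;   // a new node starts as a leaf: left and right stay null
        }
    }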

The left and right links of a leaf node point to null, so you know that you have reached the end of the tree.

Operations in Binary Search


The basic operations that can be performed on a binary search tree data structure are the following:

Insert(int n): add a node with value n to the tree. It is O(log n).
Find(int n): find the node with value n in the tree. It is O(log n).
Delete(int n): delete the node with value n from the tree. It is O(log n).
Display(): print the entire tree in increasing order. It is O(n).
1. Insert Operation
The very first insertion creates the tree. Afterwards, whenever an element is to be inserted, first
locate its proper location. Start searching from the root node, then if the data is less than the key
value, search for the empty location in the left subtree and insert the data. Otherwise, search for
the empty location in the right subtree and insert the data. The algorithm for Insert Operation is
given below:

First, it checks the root of the tree; if the tree is empty, the new node becomes the root. If the root already exists, the new value is compared with the data of the current node. Starting from the root, if the new data is greater than the data of the node, the pointer moves into the right subtree, and if it is smaller it moves into the left subtree. When the comparison reaches an empty position, the data is inserted there.

Example in pictorial form:

Implementation in Java

The implementation of the insert function should look like this:
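The original implementation is a screenshot that is not reproduced here. The hedged sketch below follows the algorithm described above; it assumes the Node class sketched earlier and a tree class holding a root field:

    class BinarySearchTree {
        Node root;

        // Inserts a value by walking down from the root until an empty position is found.
        void insert(int data) {
            Node newNode = new Node(data);
            if (root == null) {                    // empty tree: the new node becomes the root
                root = newNode;
                return;
            }
            Node current = root;
            while (true) {
                if (data < current.data) {         // smaller values go to the left subtree
                    if (current.left == null) { current.left = newNode; return; }
                    current = current.left;
                } else {                           // larger or equal values go to the right subtree
                    if (current.right == null) { current.right = newNode; return; }
                    current = current.right;
                }
            }
        }
    }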

2. Delete
It is more complicated than the Find() and Insert() operations. Here we have to deal with three cases:
the node to be deleted is a leaf node (no children);
the node to be deleted has only one child;
the node to be deleted has two children.

Node to be deleted is a leaf node (no children). This is a very simple case: traverse to that node, keep track of the parent node and the side on which the node exists (left or right), and set parent.left = null or parent.right = null.

Node to be deleted has only one child.

This is a slightly more complex case. If the node to be deleted (deleteNode) has only one child, traverse to that node and keep track of the parent node and the side on which the node exists (left or right).
Check which side's child is null (since it has only one child).
Say the node to be deleted has its child on the left side; then take the entire subtree on that side and attach it to the parent on the side where deleteNode existed, as in the first case and the example.

Node to be deleted has two children.

Now this is the interesting case: we cannot simply replace deleteNode with either of its children. Why not? Let's try out an example.
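The worked example in the original is a diagram that is not reproduced here. The standard approach, assumed in the sketch below, is to replace the deleted node's value with its in-order successor (the minimum of its right subtree) and then delete that successor, which has at most one child:

    // Finds the smallest value in the subtree rooted at 'node':
    // keep following left links, because smaller values always lie to the left.
    private Node findMin(Node node) {
        while (node.left != null) {
            node = node.left;
        }
        return node;
    }

    // Two-children case (sketch): copy the successor's value into the node being
    // deleted, e.g. deleteNode.data = findMin(deleteNode.right).data; then remove
    // the successor node from the right subtree using the simpler one-child case.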

3. Display
To see how the nodes are displayed in increasing order, here is an example.
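The example in the original is a screenshot. An in-order traversal, which visits the left subtree, then the node, then the right subtree, and therefore prints a binary search tree in increasing order, typically looks like this hedged sketch:

    // In-order traversal: left subtree, current node, right subtree.
    void display(Node node) {
        if (node == null) {
            return;                            // reached the end of a branch
        }
        display(node.left);                    // smaller values first
        System.out.print(node.data + " ");
        display(node.right);                   // then larger values
    }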

Complete Example of the Code:
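The complete program in the original is a multi-page screenshot. A compact, self-contained version assembled from the sketches above is given below; the class names and the sample values are assumptions made for this illustration:

    // A minimal runnable binary search tree with insert and in-order display.
    class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    public class BinarySearchTreeDemo {
        private Node root;

        void insert(int data) {
            root = insert(root, data);
        }

        private Node insert(Node node, int data) {
            if (node == null) return new Node(data);       // empty spot found
            if (data < node.data) node.left = insert(node.left, data);
            else                  node.right = insert(node.right, data);
            return node;
        }

        void display(Node node) {                          // in-order traversal
            if (node == null) return;
            display(node.left);
            System.out.print(node.data + " ");
            display(node.right);
        }

        public static void main(String[] args) {
            BinarySearchTreeDemo tree = new BinarySearchTreeDemo();
            int[] values = {50, 30, 70, 20, 40, 60, 80};   // sample input (assumed)
            for (int v : values) tree.insert(v);
            tree.display(tree.root);                       // prints: 20 30 40 50 60 70 80
        }
    }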

Output of the Tree: the display operation prints the stored values in increasing (in-order) order.

Binary Search Algorithm Implementation II


Following are the steps of implementation that we will be following; a sketch in Java is given after the steps:

1. Start with the middle element:


2. If the target value is equal to the middle element of the array, then return the index of the middle
element.
3. If not, then compare the middle element with the target value,
 If the target value is greater than the number in the middle index, then pick the elements
to the right of the middle index, and start with Step 1.
 If the target value is less than the number in the middle index, then pick the elements to
the left of the middle index, and start with Step 1.
4. When a match is found, return the index of the element matched.
5. If no match is found, then return -1
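A hedged sketch of these steps on a sorted int array (the method name binarySearch is an assumption) is:

    // Iterative binary search over a sorted array; returns the index of 'target' or -1.
    public static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;      // step 1: start with the middle element
            if (sorted[mid] == target) {
                return mid;                        // steps 2 and 4: match found, return its index
            } else if (target > sorted[mid]) {
                low = mid + 1;                     // step 3: search the right half
            } else {
                high = mid - 1;                    // step 3: search the left half
            }
        }
        return -1;                                 // step 5: no match found
    }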

Complexity of an algorithm
The complexity of an algorithm defines the efficiency of an algorithm. Time complexity of an
algorithm signifies the total time required by the program to run till its completion. It is generally
estimated by counting the number of elementary steps to finish the execution. Space complexity can
be seen as amount of extra memory required to execute an algorithm. Both the time and space
complexity are given with respect to the input size. The space complexity of binary search tree operations depends upon the height of the tree; for a balanced tree the height is about log n, so the space complexity is O(log n).

Best case complexity


The binary search tree is developed on the idea of binary search algorithm which allows for fast lookup,
insertion and removal of nodes. The way that they are set up means that, on average, each comparison
allows the operations to skip about half of the tree, so that each lookup, insertion or deletion takes time
proportional to the logarithm of the number of items stored in the tree, O(log n). If the tree is perfectly
balanced, the situation is exactly like in binary search. A tree of N = 2^x nodes takes at most x comparisons. That is, we have logarithmic access cost, O(log N).

In best case,

 The binary search tree is a balanced binary search tree.


 Height of the binary search tree becomes log(n). So, Time complexity of BST Operations
= O(logn).

Worst case complexity


Sometimes the worst case can happen where the tree is not balanced. If the tree is unbalanced, the
situation tends towards linear search. This can happen when we keep adding an element in a node
larger than its parent node, the same can happen when we always add nodes with values lower than
their parents. In such case the tree degenerates to a mere linked list. In worst case a tree of N nodes
takes at most N comparisons. That is, we have linear access cost, O(N). A binary search tree is therefore worst for storing already sorted data values and best for storing random data values. In the worst case a binary search tree is as good as an unordered list, with no benefits.

In worst case,

 The binary search tree is a skewed binary search tree.

 Height of the binary search tree becomes n. So, Time complexity of BST Operations =
O(n)

In the figure referred to above, the binary search tree consists of seven nodes holding the values 1 to 7, and at each level only a single node is present, which increases the height of the BST. This is the maximum height that any binary search tree can have for seven nodes; such a tree is called a skewed binary search tree, and it is the worst case that can be drawn for seven nodes. A skewed binary search tree does not allow us to enjoy the benefits of a binary search tree in terms of search, insertion or deletion; it does not offer the advantage that the BST was made for. In the worst case the binary search tree is skewed and we have to travel from the root to the deepest node, so the height of the tree becomes n and the time complexity of BST operations is O(n). In this case the binary search tree is as good as an unordered list, with no benefits.

Error Handling and Report Testing


An Error “indicates serious problems that a reasonable application should not try to catch.” Both Errors
and Exceptions are subclasses of the java.lang.Throwable class. Errors are conditions that cannot be recovered from by any handling technique and will cause the program to terminate abnormally. Errors belong to the unchecked type and mostly occur at runtime; examples of errors are an out-of-memory error or a system crash error (Dought, 2017).

Example Showing Error in Java

Fig: Error

Output:
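The figure and its output are screenshots that are not reproduced here. A hedged sketch of a program that produces such an Error (here a StackOverflowError from unbounded recursion, chosen only as an illustration) is:

    // Unbounded recursion eventually exhausts the call stack and raises an Error,
    // which a reasonable application should not try to catch.
    public class ErrorDemo {
        static void recurse() {
            recurse();                     // no base case, so the stack keeps growing
        }

        public static void main(String[] args) {
            recurse();
            // Typical result: Exception in thread "main" java.lang.StackOverflowError
        }
    }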

Exceptions: An Exception “indicates conditions that a reasonable application might want to catch.”
Exceptions are the conditions that occur at runtime and may cause the termination of program. But
they are recoverable using try, catch and throw keywords. Exceptions are divided into two categories:
checked and unchecked exceptions. Checked exceptions like IOException known to the compiler at

compile time while unchecked exceptions like ArrayIndexOutOfBoundException known to the


compiler at runtime. It is mostly caused by the program written by the programmer.

Example Showing Exception in Java

Output:
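The code in the figure is a screenshot; a hedged sketch of a program that throws an unchecked exception of the kind mentioned above (ArrayIndexOutOfBoundsException) is:

    // Accessing an index outside the bounds of the array throws an unchecked exception.
    public class ExceptionDemo {
        public static void main(String[] args) {
            int[] numbers = {1, 2, 3};
            System.out.println(numbers[5]);
            // Typical result: Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException
        }
    }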

Error handling is the process of responding to and recovering from error conditions in your program.
The mechanism provides first-class support for throwing, catching, propagating, and manipulating
recoverable errors at runtime. Some operations aren’t guaranteed to always complete execution or
produce a useful output. Optional are used to represent the absence of a value, but when an operation
fails, it’s often useful to understand what caused the failure, so that the code can respond accordingly.
The best programs of this type forestall errors if possible, recover from them when they occur without
terminating the application, or (if all else fails) gracefully terminate an affected application and save
the error information to a log file.

In programming, a development error is one that can be prevented with the help of error handling
mechanism. Such an error can occur in syntax or logic. Syntax errors, which are typographical mistakes
or improper use of special characters, are handled by rigorous proofreading. Logic errors, also called
bugs, occur when executed code does not produce the expected or desired result. Logic errors are best
handled by meticulous program debugging. This can be an ongoing process that involves, in addition

to the traditional debugging routine, beta testing prior to official release and customer feedback after
official release. A run-time error takes place during the execution of a program, and usually happens
because of adverse system parameters or invalid input data. An example is the lack of sufficient
memory to run an application or a memory conflict with another program. On the Internet, run-time
errors can result from electrical noise, various forms of malware or an exceptionally heavy demand on
a server. Run-time errors can be resolved, or their impact minimized, by the use of error handler
programs, by vigilance on the part of network and server administrators, and by reasonable security
countermeasures on the part of Internet users.

The error handling process is one of the important processes in building big applications because it lets users know about errors in a friendly manner. It tells them that something has gone wrong in the application and that they should contact the technical support department, or that someone from technical support has already been notified. This kind of error handling mechanism makes the application more interactive through the different messages that can be shown, which allow the user to inform the technical support team. Error handling mechanisms also allow programmers to debug issues in the application. The messages that the application displays must be friendly to users so that they can make decisions more easily.

Exception Handling
Exception handling is the most important feature of most of the OOP based programming languages,
especially in Java. It is the feature which allows us to handle the runtime errors caused by Exceptions.
Exception is an unwanted event that interrupts the normal flow of the program. When an exception
occurs program execution gets terminated. In such cases we get a system generated error message. The
good thing about exceptions is that they can be handled in Java. By handling the exceptions we can
provide a meaningful message to the user about the issue rather than a system generated message,
which may not be understandable to a user. Due to the several reasons that can cause a program to
throw exception. For example: Opening a non-existing file in your program, Network connection
problem, bad input data provided by user etc. If the exceptions occurs, which has not been handled by
programmer then the program execution gets terminated and a system generated error message is
shown to the user. Exception handling is the mechanism for handling errors or problems that might be encountered at runtime, thus protecting the application from an immediate crash as well as from problems caused by poorly written code.

Advantage of Error and Exception Handling


The main advantages of the exception-handling mechanism in object oriented programming over the
traditional error-handling mechanisms are the following:

 The separation of error-handling code from normal code unlike traditional programming
languages, there is a clear-cut distinction between the normal code and the error-handling code.
This separation results in less complex and more readable (normal) code. Further, it is also
more efficient, in the sense that the checking of errors in the normal execution path is not
needed, and thus requires fewer CPU cycles.
 A logical grouping of error types Exceptions can be used to group together errors that are
related. This will enable us to handle related exceptions using a single exception handler. When
an exception is thrown, an object of one of the exception classes is passed as a parameter.
Objects are instances of classes, and classes fall into an inheritance hierarchy in Java. This
hierarchy can be used to logically group exceptions. Thus, an exception handler can catch
exceptions of the class specified by its parameter, or can catch exceptions of any of its sub-
classes.
 The ability to propagate errors up the call stack another important advantage of exception
handling in object oriented programming is the ability to propagate errors up the call stack.
Exception handling allows contextual information to be captured at the point where the error
occurs and to propagate it to a point where it can be effectively handled. This is different from
traditional error-handling mechanisms in which the return values are checked and propagated
to the calling function.

Exception Handling in Java


Exception handling is very important concept in Java Programming Language. While working on java
projects, many times we have to deal with it. Exception Handling is the dominant mechanism to handle
runtime breakdown. To prevent unexpected termination of the program, exceptions must be handled
and the feature is easy to understand and straightforward in use. There are some special keywords
available in java to handle the exception. They are:

 Try
 Catch
 Finally
 Throw and Throws

Try and Catch


Java try block is used to enclose the code that might throw an exception. It must be used within
the method. If an exception occurs at the particular statement of try block, the rest of the block
code will not execute. So, it is recommended not to keep code in the try block that will not throw an exception. A Java try block must be followed by either a catch or a finally block. A Java catch
block is used to handle the Exception by declaring the type of exception within the parameter.
The declared exception must be the parent class exception (i.e., Exception) or the generated
exception type. However, the good approach is to declare the generated type of exception. The
catch block must be used after the try block only. You can use multiple catch block with a
single try block.
Example of exception handled through try and catch block.
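The example itself is a screenshot in the original. Based on the description below it (x is divided by 0 inside a try block and the catch block prints a message), a hedged reconstruction is:

    // Dividing by zero throws an ArithmeticException, which the catch block handles.
    public class TryCatchDemo {
        public static void main(String[] args) {
            try {
                int x = 10;
                int result = x / 0;             // throws ArithmeticException
                System.out.println(result);     // never reached
            } catch (ArithmeticException e) {
                System.out.println("Expression throws divide by 0 exception.");
            }
        }
    }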

Output is: Expression throws divide by 0 exception.

Here, in the above program, we are trying to divide x by 0, which throws an exception, and so the code is enclosed in the try block. The catch block handles this exception and prints "Expression throws divide by 0 exception."

Also, multiple catch handling concept is also available in java. There are few general rules that must
be kept in mind while using try block with multiple catch blocks:

 At a time only one exception occurs and only one catch block is executed.
 The multiple catch blocks must be ordered from most specific to most general.

Let's take an example of using multiple catch to handle the error in the program.
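The code is again a screenshot. Based on the explanation that either an ArrayIndexOutOfBoundsException or an ArithmeticException could occur and that the arithmetic one is caught first, a hedged reconstruction is:

    // The division throws ArithmeticException before the bad array access is reached,
    // so the first catch block handles it and execution continues after the catch blocks.
    public class MultiCatchDemo {
        public static void main(String[] args) {
            try {
                int[] numbers = new int[5];
                int result = 30 / 0;            // ArithmeticException is thrown here
                numbers[10] = result;           // would throw ArrayIndexOutOfBoundsException
            } catch (ArithmeticException e) {
                System.out.println("Arithmetic Exception Handled.");
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Array Index Exception Handled.");
            }
        }
    }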

Output: Arithmetic Exception Handled.

Here, this is a different situation, where either an ArrayIndexOutOfBoundsException or an ArithmeticException could occur. Since the arithmetic exception is raised first and the first catch block is for ArithmeticException, it is caught there, and program control continues after the catch blocks.

Finally Block
It is used to execute the code which is of high significance. No matter what, if the finally block is
written, it is always executed. The finally consists of statements that must be executed at any cost,
whether the exception occurs or not. The finally block can be written either after the try-catch block
or can be directly followed after the try block.
Syntax of finally block to handle the exception.

Fig. Syntax to handle the exception using finally

Example of finally block to handle the exception.
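The example is a screenshot; a hedged reconstruction that matches the stated output (the catch message followed by the finally message) is:

    // The finally block runs whether or not the exception occurs.
    public class FinallyDemo {
        public static void main(String[] args) {
            try {
                int result = 10 / 0;            // throws ArithmeticException
            } catch (ArithmeticException e) {
                System.out.println("Exception throws divide by 0 exception.");
            } finally {
                System.out.println("You are in the finally block.");
            }
        }
    }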

Output: Exception throws divide by 0 exception. You are in the finally block. The above example takes the execution of the program through the catch and the finally block, so abrupt termination and the possible errors are controlled there. The circumstances that prevent execution of the code in a finally block are:

 The death of a Thread


 Using of the System. exit() method.
 Due to an exception arising in the finally block.

Throw and Throws


The throw keyword in Java is used to explicitly throw an exception from a method or any block of
code. We can throw either checked or unchecked exception. The throw keyword is mainly used to
throw custom exceptions. In the example below, we are using an if control statement.

We will create a method to check if the number is less than 0 or not. If the number is less than 0,
then we throw ArithmeticException, otherwise, we print division is possible.
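The code is a screenshot in the original; a hedged reconstruction that follows the description (throwing an ArithmeticException with the message "Invalid Input" when the number is negative) and matches the stated output is:

    // throw raises an exception explicitly; here it is never caught,
    // so the JVM prints the exception and terminates the program.
    public class ThrowDemo {
        static void checkNumber(int number) {
            if (number < 0) {
                throw new ArithmeticException("Invalid Input");
            } else {
                System.out.println("Division is possible");
            }
        }

        public static void main(String[] args) {
            checkNumber(-5);
            // Result: Exception in thread "main" java.lang.ArithmeticException: Invalid Input
        }
    }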

Output: Exception in thread main java.lang.ArithmeticException: Invalid Input

Now, let us understand the concept of throws keyword with example.
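This example is also a screenshot. One hedged reconstruction, in which the method declares the exception with throws and the caller handles it, matching the stated output, is:

    // throws declares that a method may raise an exception; the caller decides how to handle it.
    public class ThrowsDemo {
        static void divide() throws ArithmeticException {
            int result = 10 / 0;                // raises ArithmeticException
        }

        public static void main(String[] args) {
            try {
                divide();
            } catch (ArithmeticException e) {
                System.out.println("An arithmetic exception thrown");
            }
            System.out.println("Program executed");
        }
    }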

Output: An arithmetic exception thrown Program executed

Here, in the above example, the exception is declared with the throws keyword. If such an exception occurs and is not handled, it is thrown at run time; if no exception occurs, the code executes normally. To avoid unhandled exceptions, one can use a try-catch block to handle the exception that is thrown.

In this way, errors and exceptions can be handled using the keywords provided by Java, so that the program runs smoothly without the sudden termination or random shutdown caused by an error.

Implementation of ADT/algorithm to solve well defined problem:


As part of ADT implementation we have taken Binary Search. Here, we are going to implement the
search tree to solve the specific problem.

Problem:

1. Build a binary tree.
2. Find all paths to leaves (i.e., 2–3–9, 2–3–4, 2–8–7, 2–8–1).
3. Find the longest sequential path.
4. Invert the binary tree from the earlier problem.

Solution Implemented in Programming Language:


1. Build a Binary tree

In the code above we have used a queue to populate the tree, because a queue allows us to traverse level by level looking for leaves. Using this strategy, the tree is populated level by level from left to right. Also, putting all of the values in an array and iterating over them is a nice way to keep the code looking clean.

2. Find all paths to leaves (ie, 2–3–9, 2–3–4, 2–8–7, 2–8–1)

3. Find the longest sequential path

Decision: Because problems 2 and 3 seemed to share a lot in common, I decided to use one function
to do both. The code is below:

In the above code, I’m using the slice method so that each new stack frame will have its own copy
of the currentPath. Using this strategy, when the stack frame is popped off the stack, the frame
under it will have a currentPath that accurately reflects its state. If currentPath was passed in
without using the slice method, the underlying array would be mutated, and the state in each stack
frame would be altered.

{paths, maxSequence} is equivalent to {paths: paths, maxSequence: maxSequence}; the shorthand keeps the code looking clean.

4. Invert the Binary Tree from the earlier problem

Again, the queue is proving to be useful. In order to test this code, I wrote a standard in-order traversal.

Then used the following code to verify that the inversion worked...

In this way, I have implemented a fully ADT-specific program to solve the given problem.

Evaluation of ADT complexity:


Binary search is a very popular and fast algorithm for solving problems. Experts have noted that if all the names in the world were written down together in order and you wanted to search for the position of a specific name, binary search would accomplish this in a maximum of about 35 iterations (jaisingh, 2019). The main principle of this algorithm is that the data must be sorted initially; once it is, the problem can be solved by the algorithm. Whenever binary search is performed on a sorted set, the number of iterations needed is reduced according to the value being searched for. The constraint of using the binary search algorithm in an application is that the array must already be in sorted form. The performance of binary search comes from repeatedly dividing the array: the middle value acts as the root node, the left part forms one subtree and the right part forms another, and the search for the target element descends through them until the expected value is found or the range is exhausted. In the worst case binary search runs in

In the worst case binary search runs in O(log N) time, and in the best case its time complexity is O(1). The worst case happens when the target element is not present in the array at all, and the best case occurs when the target value is found at the midpoint on the first comparison. Binary search needs only three pointers to elements (low, mid and high), which may be array indices or pointers to memory locations, regardless of the size of the array. Since it takes log2(n) bits to encode a pointer to an element of an array with n elements, the auxiliary space complexity of binary search is O(log n); in addition, O(n) space is needed to store the array itself. Another practical complexity of binary search concerns the processor cache. Most processors cache memory locations that have been accessed recently, along with locations close to them: when an array element is accessed, the element itself may be cached together with the elements stored close to it in RAM, making it fast to access array elements that are close in index to each other. On a large sorted array, binary search jumps to distant memory locations, unlike algorithms (such as linear search and linear probing in hash tables) that access elements in sequence, and this adds slightly to the running time of binary search for large arrays on most systems.
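As an illustration of the three pointers (low, mid and high) and the repeated halving described above, a minimal iterative binary search sketch (TypeScript, not taken from the original report) might look like this:

// Return the index of target in the sorted array arr, or -1 if it is not present.
function binarySearch(arr: number[], target: number): number {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);  // midpoint of the current subarray
    if (arr[mid] === target) return mid;       // best case: found at the midpoint, O(1)
    if (arr[mid] < target) low = mid + 1;      // discard the left half
    else high = mid - 1;                       // discard the right half
  }
  return -1;                                   // worst case: not present, O(log n) comparisons
}

console.log(binarySearch([1, 2, 4, 7, 8, 9], 7)); // 3
console.log(binarySearch([1, 2, 4, 7, 8, 9], 5)); // -1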

Asymptotic analysis for effective algorithms:


Algorithm complexity is a very important topic in computer science. Knowing the complexity of an algorithm allows us to answer questions such as:

i. How long will the program run on a given input?

ii. How much space will it take?
iii. Is the problem solvable in practice?

These questions form an important basis for comparing different algorithms. An understanding of algorithmic complexity gives programmers insight into the efficiency of their code, and it also matters to several theoretical areas of computer science, including algorithms, data structures and complexity theory. In general, asymptotic analysis of an algorithm refers to defining a mathematical framing of its run-time performance. Using asymptotic analysis, we can conclude the best-case, average-case and worst-case behaviour of an algorithm. Asymptotic analysis is input-bound: if there is no input to the algorithm, it is taken to work in constant time, and all factors other than the input are treated as constant (David, 2019). Suppose we are developing a program: the time it takes to execute and the memory it occupies are both important, and measuring that execution time and memory usage is a matter of complexity. Whichever programming language is used, the algorithm is prepared first.


While choosing an algorithm, developers pick the best one on the basis of those measures. Asymptotic analysis is used above all in organizations that have to deal with large data sets: in computer science these techniques are applied to the analysis of algorithms, and the behaviour of an algorithm on larger data sets is understood with the help of asymptotic analysis. Mathematical example:

The simplest example is the function f(n) = n² + 3n: the term 3n becomes insignificant compared with n² when n is very large. The function f(n) is said to be asymptotically equivalent to n² as n → ∞, written symbolically as f(n) ~ n².

The notations used in asymptotic analysis express the fastest and slowest running times of an application, that is, the worst, best and average time for the algorithm to complete. The central idea of asymptotic evaluation is that it is carried out in terms of the size of the input given to the algorithm.

Importance of Asymptotic Analysis


 It helps characterize an algorithm and calculate its efficiency.
 The performance of several algorithms can be measured and compared easily.
 Best-, worst- and average-case scenarios can be understood with the help of asymptotic analysis.
 If there is no input to the algorithm, the algorithm is taken to work in constant time.
 The performance of an algorithm is measured in terms of the size of the input given to it.

Types of asymptotic notation:


The main idea of asymptotic analysis is to have a measure of efficiency of algorithms that doesn’t
depend on machine specific constants, and doesn’t require algorithms to be implemented and time
taken by programs to be compared. Asymptotic notations are mathematical tools to represent time
complexity of algorithms for asymptotic analysis. The following 3 asymptotic notations are mostly
used to represent time complexity of algorithms.


1. Big-O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort: it takes linear time in the best case and quadratic time in the worst case, so we can safely say that the time complexity of Insertion Sort is O(n²). Note that O(n²) also covers linear time. If we used the Θ notation to represent the time complexity of Insertion Sort, we would have to use two statements for the best and worst cases:
1. The worst-case time complexity of Insertion Sort is Θ(n²).
2. The best-case time complexity of Insertion Sort is Θ(n).

The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm, and many times an upper bound can be found simply by looking at the algorithm.
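The defining set of functions is not written out in the original; the standard definition, supplied here for completeness, is:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }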

2. Omega Notation: Just as the Big O notation provides an asymptotic upper bound on a function, the Ω notation provides an asymptotic lower bound. Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As discussed above, the best-case performance of an algorithm is generally not very useful, so the Omega notation is the least used of the three. For a given function g(n), we denote by Ω(g(n)) the following set of functions:
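In the original this set appears only as an image; the standard definition is:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }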


Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.
3. Theta Notation: The Theta notation bounds a function from above and below, so it defines exact asymptotic behaviour. A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constants. For example, consider the expression 3n³ + 6n² + 6000 = Θ(n³). Dropping the lower-order terms is always fine because there will always be an n0 after which Θ(n³) has higher values than Θ(n²), irrespective of the constants involved. For a given function g(n), we denote by Θ(g(n)) the following set of functions:
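The defining set is again shown only as an image in the original; the standard definition is:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }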

The above definition means that if f(n) is Theta of g(n), then the value of f(n) always lies between c1·g(n) and c2·g(n) for large values of n (n ≥ n0). The definition of Theta also requires that f(n) be non-negative for values of n greater than n0.

Factors of Asymptotic Analysis


1. Time Complexity
Time complexity is the amount of time required by an algorithm to execute. It is measured in terms of the number of operations rather than in computer time, because computer time depends on the hardware, processor, etc.
Some general orders we may consider are: O(c) < O(log n) < O(n) < O(n log n), etc.
2. Space Complexity
The space complexity of an algorithm is the amount of memory it needs to run to completion. It can be defined as the amount of computer memory required during program execution, as a function of the input size.
The difference between space complexity and time complexity is that space can be reused.


For example, consider the search problem (searching for a given item) in a sorted array. One way to search is Linear Search (order of growth is linear) and the other is Binary Search (order of growth is logarithmic). To understand how asymptotic analysis addresses the questions raised above, suppose we run Linear Search on a fast computer and Binary Search on a slow computer. For small values of the input array size n, the fast computer may take less time, but beyond a certain input size Binary Search will definitely start taking less time than Linear Search, even though it is running on the slower machine. The reason is that the order of growth of Binary Search with respect to the input size is logarithmic while the order of growth of Linear Search is linear, so the machine-dependent constants can always be ignored beyond certain values of the input size.

For example, we might say "this algorithm takes n² time," where n is the number of items in the input, or we might say "this algorithm takes constant extra space," because the amount of extra memory needed does not vary with the number of items processed. For both time and space we are interested in the asymptotic complexity of the algorithm: when n (the number of items of input) goes to infinity, what happens to the performance of the algorithm?


How the effectiveness of algorithms is measured


We have three cases to analyze an algorithm:

 Worst Case
 Average Case
 Best Case

Let us consider the following implementation of Linear Search.
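The implementation appears as a screenshot in the original; a minimal sketch matching the names referred to below (a search() function, an array arr and a target value x) might be:

// Compare x with each element of arr in turn; return its index, or -1 if it is absent.
function search(arr: number[], x: number): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === x) return i;   // best case: x found at the first location
  }
  return -1;                      // worst case: x not present, all n elements compared
}

console.log(search([1, 10, 30, 15], 30)); // 2
console.log(search([1, 10, 30, 15], 6));  // -1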


Worst Case Analysis (Usually Done)


In worst-case analysis, we calculate an upper bound on the running time of an algorithm by considering the case that causes the maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched for (x in the above code) is not present in the array: the search() function then compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of Linear Search is Θ(n).

Average Case Analysis (Sometimes done)


In average-case analysis, we take all possible inputs, calculate the computing time for each of them, sum all the calculated values and divide the sum by the total number of inputs; this requires knowing (or predicting) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array), so we sum over all the cases and divide by (n + 1). The value of the average-case time complexity follows:
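In the original this value appears as an image; under the uniform-distribution assumption, the standard derivation is:

Average-case time = [ Θ(1) + Θ(2) + … + Θ(n) + Θ(n+1) ] / (n+1) = Θ((n+1)(n+2)/2) / (n+1) = Θ(n)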

Best Case Analysis (Bogus)


In best-case analysis, we calculate a lower bound on the running time of an algorithm by considering the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location; the number of operations is then constant (not dependent on n), so the best-case time complexity is Θ(1).

Most of the time we perform worst-case analysis, because in worst-case analysis we guarantee an upper bound on the running time of an algorithm, which is useful information. Average-case analysis is not easy to do in most practical cases and is rarely done, since it requires knowing (or predicting) the mathematical distribution of all possible inputs. Best-case analysis is bogus in the sense that guaranteeing a lower bound on an algorithm does not provide useful information: in the worst case, the algorithm may still take years to run.


For some algorithms, all the cases are asymptotically the same, i.e., there are no distinct worst and best cases. Merge Sort, for example, performs Θ(n log n) operations in all cases. Most other sorting algorithms do have worst and best cases: in the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted and the best case occurs when the pivot always divides the array into two halves; for Insertion Sort, the worst case occurs when the array is reverse sorted and the best case occurs when the array is already sorted in the output order.

Interpret a trade-off specifying an ADT


A trade-off, in the context of algorithms, refers to giving up either time or space in order to make the application better optimized. Algorithms are supposed to reduce the manual work involved in a task, making it automatic and giving accurate results to users. Which algorithm is chosen for an application depends on the application's needs, because no algorithm offers the best possible time and space complexity at once. The ideal algorithm, and hence the ideal program, for a given problem would be the one that requires the least memory and the least time to execute its instructions and produce output; in practice it is not always possible to achieve both objectives. As noted earlier, there may be more than one approach to solving the same problem, and one approach may require more space but less time to complete its execution, so we may have to sacrifice one at the cost of the other. This is what we mean when we say there exists a time-space trade-off among algorithms.

Therefore, if space is our constraint we have to choose a program that requires less space at the cost of more execution time; if time is our constraint, we have to choose a program that takes less time to execute at the cost of more space.

An Abstract Data Type (ADT) is a type (or class) of objects whose behaviour is defined by a set of values and a set of operations. The definition of an ADT mentions only what operations are to be performed, not how these operations are implemented: it does not specify how the data is organized in memory or which algorithms are used to implement the operations. It is called "abstract" because it gives an implementation-independent view. The process of providing only the essentials and hiding the details is known as abstraction. For example:


The List type of Java is abstract.

Now, abstract data types introduce an abstraction barrier between those who implement a data type and those who use it. If you are implementing the data type, you know how its values are represented and are allowed to write code that depends on that representation. If you are a user of an abstract data type, you do not know how its values are represented and you are not allowed to write code that depends on the representation. For example, suppose I implement the queue ADT with an array of a fixed size because that satisfies my own requirements; if someone else uses the same queue ADT without taking into account how many elements are being added to it, the queue may overflow and cause an array-out-of-bounds exception. This leads to a fundamental trade-off which characterizes abstract data types: by making the implementation of the data opaque, we have gained modularity at the expense of extensibility (see the sketch below).
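A minimal sketch of that situation (TypeScript; the FixedQueue name and its fixed capacity are illustrative assumptions, not code from the original report):

// A queue ADT whose hidden representation enforces a fixed capacity.
class FixedQueue<T> {
  private items: T[] = [];
  constructor(private capacity: number) {}

  enqueue(item: T): void {
    if (this.items.length >= this.capacity) {
      // The representation is hidden, so a user only discovers the limit at run time.
      throw new Error('Queue overflow: fixed capacity exceeded');
    }
    this.items.push(item);
  }

  dequeue(): T | undefined {
    return this.items.shift();
  }
}

const q = new FixedQueue<number>(2);
q.enqueue(1);
q.enqueue(2);
// q.enqueue(3); // would throw: the user cannot extend the hidden representation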

In the analysis of algorithms we are interested in the average case (the amount of time a program might be expected to take on typical input data) and in the worst case (the total time the program or algorithm would take on its worst possible inputs). There are different types of trade-off in algorithms, including:

1. Lookup tables vs. recalculation: An implementation of an algorithm that involves a lookup table can include the entire table, which reduces computing time but increases the amount of memory needed, or it can compute table entries as needed, which increases computing time but reduces memory requirements.
2. Compressed vs. uncompressed data: The problem of data storage can also be handled using the space-time trade-off. If data is stored uncompressed it takes more space, but access takes less time than if the data were stored compressed (compressing the data reduces the space it takes, but running the decompression algorithm takes time). Depending on the particular instance of the problem, either way can be practical. There are also rare cases where it is possible to work directly with compressed data, such as compressed bitmap indices, where it is faster to work with compression than without it.
3. Re-rendering vs. stored images: Storing only the SVG source of a vector image and rendering it as a bitmap every time the page is requested trades time for space (more time used, less space). Rendering the image once when the page is changed and storing the rendered image trades space for time (more space used, less time). This technique is more generally known as caching.


4. Smaller code vs. loop unrolling: Loop unrolling makes the code longer for each iteration of a loop, but it saves the computation required to jump back to the beginning of the loop at the end of each iteration. Larger code size can thus be traded for higher program speed when applying loop unrolling.

Abstract data types (implementation-independent data structures) offer several advantages over concrete data structures:
1. Representation Independence: Most of the program becomes independent of the abstract data
type’s representation, so that representation can be improved without breaking the entire program.
2. Modularity: With representation independence, the different parts of a program become less
dependent on other parts and on how those other parts are implemented.
3. Interchangeability of Parts: Different implementations of an abstract data type may have
different performance characteristics. With abstract data types, it becomes easier for each part of a
program to use an implementation of its data types that will be more efficient for that particular
part of the program.
4. By the definition of an ADT we do not need to worry about the implementation details of the algorithms and functionality, which helps reduce the complexity of understanding the different programming tasks that might be needed.
5. Errors in an ADT cannot be caused by the code that uses the type; any errors belong only to the implementation or representation of the data type.
6. If any changes are needed to a data type defined as an ADT, only the implementation of the ADT has to change; the code that uses it does not.
7. ADTs adopt the principles of OOP, so they are reusable and robust, and features like encapsulation help keep the data secure.
8. ADTs help represent different parts of the program independently, which leads to parts that are less dependent on one another (modularity) and to interchangeable code across the whole project.

Example: Java's standard libraries supply several different implementations of its Map data type. The TreeMap implementation might be more efficient when a total ordering on the keys can be computed quickly but a good hash value is hard to compute efficiently. The HashMap implementation might be more efficient when hash values can be computed quickly and there is no obvious ordering on keys. The part of a program that creates a Map can decide which implementation to use. The parts of a program that deal with a created Map don't have to know how it was implemented; once created, it's just a Map.


If it weren't for abstract data types, every part of the program that uses a Map would have to be written twice, with one version to deal with TreeMap implementations and another version to deal with HashMap implementations.

Conclusion
Abstract data types (ADTs) are classes of objects whose logical behaviour is defined by a set of values and a set of operations. Working with an ADT lets us concentrate on what the algorithm does rather than on the programming constructs used to implement it. ADTs are theoretical concepts in computer science that are not directly supported by most programming languages; they help in the design and analysis of algorithms, data structures and software systems, and do not correspond to any specific feature of a particular language. Many ADTs, such as the queue and the stack, can be understood through simple operations like push, pop and insert on the elements they hold. ADTs help in creating modules that allow a better and more efficient problem-solving process: those models describe the data our algorithm will manipulate in a way that is much more convenient with respect to the problem itself, and the ADT logically describes how to view the data and which operations are necessary. Each of the different ADTs also has complexities attached to the data structures used to implement it. For example, for a fixed-size array implementation of a stack, the time complexity is O(1) for both the push and pop operations, since only the top pointer has to move; for dynamically resizable arrays, the amortized time complexity of both push and pop is also O(1).


References

Chauhan, A., 2016. GeeksforGeeks. [Online] Available at: https://www.geeksforgeeks.org/abstract-data-types/ [Accessed 6 February 2019].

David, K., 2019. Algorithms for Better Programming. IT Tech Reports Weekly, IV(15), pp. 45-48.

Dought, Y. G., 2017. Error Handling in Java. Programming Paradigm, II(12), pp. 40-50.

J. Bern, 2017. InteractivePython. [Online] Available at: http://interactivepython.org/runestone/static/pythonds/Introduction/WhyStudyDataStructuresandAbstractDataTypes.html [Accessed 14 April 2019].

Jack, C., 2018. Study Tonight. [Online] Available at: https://www.studytonight.com/data-structures/quick-sort [Accessed 7 April 2019].

Jorge, F., 2016. Tech Differences. [Online] Available at: https://techdifferences.com/difference-between-bubble-sort-and-selection-sort.html [Accessed 6 April 2019].

Jorge, M., 2018. Study Tonight. [Online] Available at: https://www.studytonight.com/data-structures/introduction-to-data-structures [Accessed 14 April 2019].

Marshal, J., 2015. Brilliant.org. [Online] Available at: https://brilliant.org/wiki/bellman-ford-algorithm/ [Accessed 10 April 2019].

Michel, R., 2018. Machine Learning and Computing World. [Online] Available at: https://study.com/academy/lesson/sorting-algorithm-comparison-strengths-weaknesses.html [Accessed 5 April 2019].


