Stack Data Structures

Data Structure is a way to store and organize data so that it can be used efficiently

What is Data Structure?

The name itself indicates that a data structure is about organizing data in memory. There are many
ways of organizing data in memory; we have already seen one data structure, the array in C. An
array is a collection of memory elements in which data is stored sequentially, i.e., one after
another. In other words, an array stores its elements in a contiguous manner. There are also
other ways to organize data in memory. Let's see the different types of data
structures.

A data structure is not a programming language like C, C++, Java, etc. It is a set of
algorithms that we can use in any programming language to structure data in memory.

Types of Data Structures

There are two types of data structures:

Primitive data structure


Non-primitive data structure

Primitive Data structure

The primitive data structures are the primitive data types. int, char, float, double, and pointer are
the primitive data structures that can hold a single value.

Non-Primitive Data structure

The non-primitive data structure is divided into two types:

Linear data structure

Non-linear data structure

The arrangement of data in a sequential manner is known as a linear data structure. The data
structures used for this purpose are arrays, linked lists, stacks, and queues. In these data
structures, each element is connected to only one other element in a linear form.

When one element is connected to 'n' other elements, the structure is known as a non-linear data
structure. The best examples are trees and graphs. In this case, the elements are not arranged in
a sequential manner.

Data structures can also be classified as:


Static data structure: It is a type of data structure where the size is allocated at the compile time.
Therefore, the maximum size is fixed.

Dynamic data structure: It is a type of data structure where the size is allocated at the run time.
Therefore, the maximum size is flexible.

Major Operations

The major or most common operations that can be performed on data structures are:

Searching: We can search for any element in a data structure.

Sorting: We can sort the elements of a data structure either in an ascending or descending order.

Insertion: We can also insert the new element in a data structure.

Updating: We can also update an element, i.e., we can replace an element with another element.

Deletion: We can also perform the delete operation to remove the element from the data
structure.

Advantages of Data Structures

Efficiency: The efficiency of a program depends upon the choice of data structures. For example,
suppose we have some data and we need to search for a particular record. If we organize our data
in an unordered array, we will have to search sequentially, element by element; hence, using an
array may not be very efficient here. There are better data structures that can make the search
process efficient, like an ordered array, a binary search tree, or a hash table.

Reusability: Data structures are reusable, i.e. once we have implemented a particular data
structure, we can use it at any other place. Implementation of data structures can be compiled
into libraries which can be used by different clients.
Abstraction: Data structure is specified by the ADT which provides a level of abstraction. The
client program uses the data structure through interface only, without getting into the
implementation details.

Abstract Data Type (ADT) is a type (or class) for objects whose behaviour is defined by a set of
values and a set of operations.

The definition of ADT only mentions what operations are to be performed but not how these
operations will be implemented. It does not specify how data will be organized in memory and
what algorithms will be used for implementing the operations. It is called “abstract” because it
gives an implementation-independent view. The process of providing only the essentials and
hiding the details is known as abstraction.

The user of a data type does not need to know how that data type is implemented. For example, we
have been using primitive data types like int, float, and char with knowledge only of the
operations that can be performed on them, without any idea of how they are implemented.
So a user only needs to know what a data type can do, not how it does it. Think
of an ADT as a black box which hides the inner structure and design of the data type. Now we'll
define three ADTs: List ADT, Stack ADT, and Queue ADT.

A list contains elements of the same type arranged in sequential order, and the following
operations can be performed on the list.

get() – Return an element from the list at any given position.

insert() – Insert an element at any position of the list.

remove() – Remove the first occurrence of any element from a non-empty list.

removeAt() – Remove the element at a specified location from a non-empty list.

replace() – Replace an element at any position by another element.

size() – Return the number of elements in the list.

isEmpty() – Return true if the list is empty, otherwise return false.

isFull() – Return true if the list is full, otherwise return false.

A stack contains elements of the same type arranged in sequential order. All operations take
place at a single end, called the top of the stack, and the following operations can be performed:

push() – Insert an element at one end of the stack called top.

pop() – Remove and return the element at the top of the stack, if it is not empty.

peek() – Return the element at the top of the stack without removing it, if the stack is not empty.

size() – Return the number of elements in the stack.

isEmpty() – Return true if the stack is empty, otherwise return false.


isFull() – Return true if the stack is full, otherwise return false.

A queue contains elements of the same type arranged in sequential order. Operations take place
at both ends: insertion is done at the rear and deletion at the front. The following operations
can be performed:

enqueue() – Insert an element at the end of the queue.

dequeue() – Remove and return the first element of the queue, if the queue is not empty.

peek() – Return the element at the front of the queue without removing it, if the queue is not empty.

size() – Return the number of elements in the queue.

isEmpty() – Return true if the queue is empty, otherwise return false.

isFull() – Return true if the queue is full, otherwise return false.

From these definitions, we can clearly see that they do not specify how these ADTs
will be represented or how the operations will be carried out. There can be different ways to
implement an ADT; for example, the List ADT can be implemented using arrays, a singly
linked list, or a doubly linked list. Similarly, the Stack ADT and Queue ADT can be implemented
using arrays or linked lists.
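
As an illustration of this separation, below is a minimal sketch of what a Stack ADT interface could look like in C (all names here, such as stack_t and stack_push, are our own illustrative choices): the client sees only the operations, while the actual representation, array or linked list, stays hidden in the implementation file.

/* stack.h -- a hypothetical Stack ADT interface (illustrative names). */
typedef struct stack stack_t;   /* opaque type: the layout is not exposed */

stack_t *stack_create(int capacity);             /* allocate an empty stack */
void     stack_destroy(stack_t *s);              /* release the stack       */
int      stack_push(stack_t *s, int value);      /* insert at the top       */
int      stack_pop(stack_t *s, int *out);        /* remove and return top   */
int      stack_peek(const stack_t *s, int *out); /* read top, no removal    */
int      stack_is_empty(const stack_t *s);
int      stack_is_full(const stack_t *s);

A client written against such a header keeps working even if the implementation behind it is later switched from an array to a linked list.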

Usually, the time required by an algorithm falls under three cases:

Worst case: the input for which the algorithm takes the longest time.

Average case: the time taken for a typical program execution.

Best case: the input for which the algorithm takes the least time.
Asymptotic Notations

The commonly used asymptotic notations for describing the running time complexity of
an algorithm are given below:

Big oh Notation (O)

Omega Notation (Ω)

Theta Notation (θ)

Big oh Notation (O)

Big O notation is an asymptotic notation that measures the performance of an algorithm by
simply providing the order of growth of the function.

This notation provides an upper bound on a function, which ensures that the function never
grows faster than that bound.

It is the formal way to express the upper boundary of an algorithm's running time. It measures
the worst case of time complexity, i.e., the longest amount of time the algorithm can take to
complete its operation. It is defined as follows:

For example:
If f(n) and g(n) are two functions defined for positive integers,

then f(n) = O(g(n)) (read: f(n) is big oh of g(n), or f(n) is on the order of g(n)) if there
exist positive constants c and n0 such that:

f(n) ≤ c.g(n) for all n ≥ n0

This implies that f(n) does not grow faster than g(n), or that g(n) is an upper bound on the
function f(n). In this case, we are calculating the growth rate of the function, which eventually
gives the worst-case time complexity, i.e., how badly an algorithm can perform.

Let's understand through examples

Example 1: f(n)=2n+3 , g(n)=n

Now, we have to check: is f(n) = O(g(n))?

To check f(n)=O(g(n)), it must satisfy the given condition:

f(n)<=c.g(n)

First, we will replace f(n) by 2n+3 and g(n) by n.

2n+3 <= c.n

Let's assume c=5, n=1 then

2*1+3<=5*1

5<=5

For n=1, the above condition is true.

If n=2
2*2+3<=5*2

7<=10

For n=2, the above condition is true.

For any value of n starting from 1, the condition 2n+3 <= c.n is satisfied when c = 5.
Therefore, we can say that for the constant c = 5 and the constant n0 = 1, the condition
2n+3 <= c.n always holds. As the condition is satisfied, f(n) is big oh of g(n), and we can say
that f(n) grows linearly. Therefore, it follows that c.g(n) is an upper bound of f(n):
graphically, c.g(n) lies above f(n) for all n ≥ n0.

The idea of using big O notation is to give an upper bound on a particular function, which
eventually leads to the worst-case time complexity. It provides an assurance that the function
does not suddenly behave in a quadratic or cubic fashion; here it behaves in a linear manner even
in the worst case.

Omega Notation (Ω)

It basically describes the best-case scenario, which is the opposite of big O notation.

It is the formal way to represent the lower bound of an algorithm's running time. It measures
the best-case time complexity, i.e., the least amount of time an algorithm can possibly take to
complete.

It determines the fastest time in which an algorithm can run.

If we want to state that an algorithm takes at least a certain amount of time, without giving an
upper bound, we use big-Ω notation, i.e., the Greek letter "omega". It is used to bound the
growth of the running time from below for large input sizes.
If f(n) and g(n) are two functions defined for positive integers,

then f(n) = Ω(g(n)) (read: f(n) is omega of g(n)) if there exist positive constants c and n0
such that:

f(n) ≥ c.g(n) for all n ≥ n0 and c > 0

Let's consider a simple example.

If f(n) = 2n+3, g(n) = n,

Is f(n)= Ω (g(n))?

It must satisfy the condition:

f(n)>=c.g(n)

To check the above condition, we first replace f(n) by 2n+3 and g(n) by n.

2n+3>=c*n

Suppose c=1

2n+3>=n (This inequality is true for any value of n starting from 1.)

Therefore, it is proved that f(n) = 2n+3 is big omega of g(n) = n; in other words, g(n) is a
lower bound of f(n).


Here, the function g(n) is a lower bound of the function f(n) when the value of c is equal to 1;
this notation therefore characterises the fastest running time. But we are usually less
interested in the fastest running time than in the worst-case scenario, because we want to check,
for larger inputs, the worst time the algorithm can take, so that we can make decisions in the
further process.

Theta Notation (θ)

The theta notation mainly describes the average case scenarios.

It represents the realistic time complexity of an algorithm. An algorithm does not always perform
at its worst or its best; in real-world problems, algorithms mostly fluctuate between the worst
case and the best case, and this gives us the average case of the algorithm.

Big theta is mainly used when the worst-case and best-case values are the same.
It is the formal way to express both the upper bound and the lower bound of an algorithm's
running time.

Let's understand the big theta notation mathematically:

Let f(n) and g(n) be functions of n, where n is the number of steps required to execute the
program. Then:

f(n) = θ(g(n))

The above condition is satisfied only if

c1.g(n) <= f(n) <= c2.g(n) for all n ≥ n0

where the function is bounded by two limits, i.e., an upper and a lower limit, and f(n) comes in
between. The condition f(n) = θ(g(n)) is true if and only if c1.g(n) is less than or equal to
f(n) and c2.g(n) is greater than or equal to f(n), for all n ≥ n0.
Let's consider the same example where

f(n)=2n+3

g(n)=n

As c1.g(n) should be less than or equal to f(n), c1 can be 1, whereas c2.g(n) should be greater
than or equal to f(n), so c2 is equal to 5. c1.g(n) is the lower limit of f(n), while c2.g(n) is
the upper limit of f(n).

c1.g(n)<=f(n)<=c2.g(n)

Replace g(n) by n and f(n) by 2n+3

c1.n <=2n+3<=c2.n

if c1=1, c2=5, n=1

1*1 <=2*1+3 <=5*1

1 <= 5 <= 5 // for n=1, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n)

If n=2

1*2<=2*2+3<=5*2

2<=7<=10 // for n=2, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n)

Therefore, we can say that for any value of n, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n).
Hence, it is proved that f(n) is big theta of g(n). So, this is the average-case scenario which
provides the realistic time complexity.

Why do we have three different asymptotic notations?

As we know, big omega is for the best case, big oh is for the worst case, and big theta is for
the average case. Now, we will find the average, worst, and best cases of the linear search
algorithm.

Suppose we have an array of n numbers, and we want to find a particular element in the array
using linear search. In linear search, every element is compared with the searched element on
each iteration. If the match is found in the first iteration, the best case is Ω(1); if the
element matches the last element, i.e., the nth element of the array, the worst case is O(n).
The average case is the middle of the best and the worst case, so it becomes θ(n/2); constant
factors can be ignored in time complexity, so the average case is θ(n).

So, the three different analyses provide proper bounds on the actual running time. Here,
bounding means that we have an upper as well as a lower limit, which assures that the algorithm
will behave between these limits only, i.e., it will not go beyond them.

A trade-off is a situation where one thing increases and another thing decreases. It is a way to
solve a problem:

either in less time, by using more space, or

in very little space, by spending more time.

The best algorithm is one that solves a problem using less space in memory and less time to
generate the output. But in general, it is not always possible to achieve both at the same time.
The classic example is an algorithm using a lookup table: the answers to some questions for every
possible value can be written down in advance. One way of solving such a problem is to write down
the entire lookup table, which lets you find answers very quickly but uses a lot of space.
Another way is to calculate the answers without writing anything down, which uses very little
space but might take a long time. In general, the more time-efficient an algorithm is, the less
space-efficient it tends to be.
Types of Space-Time Trade-off

Compressed or uncompressed data

Re-rendering or stored images

Smaller code or loop unrolling

Lookup tables or recalculation

Compressed or uncompressed data: A space-time trade-off can be applied to the problem of
data storage. If data is stored uncompressed, it takes more space but less time. If the data is
stored compressed, it takes less space but more time to run the decompression algorithm. There
are also many instances where it is possible to work directly with compressed data, as in the
case of compressed bitmap indices, where it is faster to work with compression than without it.

Re-rendering or stored images: In this case, storing only the source and re-rendering the image
each time it is needed takes less space but more time; storing the rendered image in a cache is
faster than re-rendering but requires more space in memory.

Smaller code or Loop Unrolling: Smaller code occupies less space in memory but it requires
high computation time that is required for jumping back to the beginning of the loop at the end
of each iteration. Loop unrolling can optimize execution speed at the cost of increased binary
size. It occupies more space in memory but requires less computation time.

Lookup tables or Recalculation: In a lookup table, an implementation can include the entire
table which reduces computing time but increases the amount of memory needed. It can
recalculate i.e., compute table entries as needed, increasing computing time but reducing
memory requirements.

For example: in mathematical terms, the sequence Fn of the Fibonacci numbers is defined by
the recurrence relation:

Fn = Fn-1 + Fn-2,

where F0 = 0 and F1 = 1.

A simple solution is to compute the nth Fibonacci term using recursion from the above recurrence
relation; a lookup table trades space for time instead.
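
Below is a sketch of both sides of the trade-off in C (the function names and the table size are our own illustrative choices): fib_recursive uses almost no extra space but recomputes subproblems exponentially many times, while fib_table stores every term in a lookup table, spending O(n) memory to answer in O(n) time.

#include <stdio.h>

/* Plain recursion: little extra space, but exponential time, since the
   same subproblems are recomputed over and over again. */
long fib_recursive(int n) {
    if (n < 2) return n;                    /* F0 = 0, F1 = 1 */
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

/* Lookup table: O(n) extra space, but each term is computed exactly once. */
long fib_table(int n) {
    long table[100];                        /* assumes n < 100 */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void) {
    printf("%ld %ld\n", fib_recursive(10), fib_table(10));  /* prints 55 55 */
    return 0;
}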

Searching

Searching is the process of finding a particular element in a list. If the element is present
in the list, the search is called successful and the process returns the location of that
element; otherwise, the search is called unsuccessful.

There are two popular search methods that are widely used to search for an item in a list. The
choice of algorithm depends upon the arrangement of the list:

Linear Search

Binary Search
Linear Search

Linear search is the simplest search algorithm and is often called sequential search. In this
type of searching, we simply traverse the list completely and match each element of the list with
the item whose location is to be found. If a match is found, the location of the item is
returned; otherwise, the algorithm reports that the value is not present.

Linear search is mostly used to search an unordered list, in which the items are not
sorted. The algorithm of linear search is given as follows.

Algorithm

LINEAR_SEARCH(A, N, VAL)

Step 1: [INITIALIZE] SET POS = -1

Step 2: [INITIALIZE] SET I = 1

Step 3: Repeat Step 4 while I <= N

Step 4: IF A[I] = VAL
            SET POS = I
            PRINT POS
            Go to Step 6
        [END OF IF]
        SET I = I + 1
        [END OF LOOP]

Step 5: IF POS = -1
            PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
        [END OF IF]

Step 6: EXIT

Complexity of algorithm

Complexity Best Case Average Case Worst Case

Time O(1) O(n) O(n)
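
As a concrete sketch, the pseudocode above translates directly into C (note that C arrays are 0-indexed while the pseudocode starts at 1; the function name and the -1 "not found" convention are our own choices):

/* Return the index of the first occurrence of val in a[0..n-1], or -1. */
int linear_search(const int a[], int n, int val) {
    for (int i = 0; i < n; i++) {
        if (a[i] == val)
            return i;   /* match found: best case is the very first slot */
    }
    return -1;          /* value is not present in the array */
}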

Binary Search
Binary search is a search technique that works efficiently on sorted lists. Hence, in
order to search for an element in a list using the binary search technique, we must ensure that
the list is sorted.

Binary search follows the divide-and-conquer approach, in which the list is divided into two
halves and the item is compared with the middle element of the list. If a match is found, the
location of the middle element is returned; otherwise, we search in one of the two halves,
depending on the result of the comparison.

Binary search algorithm is given below.

BINARY_SEARCH(A, lower_bound, upper_bound, VAL)

Step 1: [INITIALIZE] SET BEG = lower_bound

END = upper_bound, POS = - 1

Step 2: Repeat Steps 3 and 4 while BEG <=END

Step 3: SET MID = (BEG + END)/2

Step 4: IF A[MID] = VAL

SET POS = MID

PRINT POS

Go to Step 6

ELSE IF A[MID] > VAL


SET END = MID - 1

ELSE

SET BEG = MID + 1

[END OF IF]

[END OF LOOP]

Step 5: IF POS = -1

PRINT "VALUE IS NOT PRESENT IN THE ARRAY"

[END OF IF]

Step 6: EXIT

Complexity

SN Performance Complexity

1 Worst case O(log n)

2 Best case O(1)

3 Average Case O(log n)
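
A C sketch of the same steps (an illustrative rendering that assumes the array is sorted in ascending order and returns -1 when the value is absent):

/* Binary search over the sorted slice a[lower..upper]; returns the
   position of val, or -1 if it is not present. */
int binary_search(const int a[], int lower, int upper, int val) {
    int beg = lower, end = upper;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2;  /* avoids overflow of (beg+end)/2 */
        if (a[mid] == val)
            return mid;                   /* match: return its position */
        else if (a[mid] > val)
            end = mid - 1;                /* search the left half  */
        else
            beg = mid + 1;                /* search the right half */
    }
    return -1;                            /* value is not present  */
}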

Stack

What is a Stack?
A stack is a linear data structure that follows the LIFO (Last-In-First-Out) principle.
A stack has one open end, whereas a queue has two ends (front and rear). It contains only
one pointer, the top pointer, which points to the topmost element of the stack. Whenever an
element is added to the stack, it is added on the top of the stack, and an element can
be deleted only from the top of the stack. In other words, a stack can be defined as a
container in which insertion and deletion are done from one end,
known as the top of the stack.
Some key points related to stack
o It is called a stack because it behaves like a real-world stack, e.g., a pile of books.

o A stack is an abstract data type with a pre-defined capacity, which means that it
can store only a limited number of elements.

o It is a data structure that follows some order to insert and delete the elements,
and that order can be LIFO or FILO.

Working of Stack
Stack works on the LIFO pattern. Suppose a stack has five memory blocks; then the size of the
stack is 5.

Suppose we want to store elements in a stack, and let's assume that the stack is empty. We take
a stack of size 5 and push elements one by one until the stack becomes full.

Once five elements have been pushed, the stack is full, since the size of the stack is 5. We can
observe that the stack gets filled up from the bottom to the top.

When we perform a delete operation on the stack, there is only one way for entry and
exit, as the other end is closed. It follows the LIFO pattern, which means that the value
entered first will be removed last. If the value 5 is entered first, it
will be removed only after the deletion of all the other elements.
Standard Stack Operations
The following are some common operations implemented on the stack:

o push(): When we insert an element into a stack, the operation is known as a
push. If the stack is full, the overflow condition occurs.

o pop(): When we delete an element from the stack, the operation is known as a
pop. If the stack is empty, meaning no element exists in the stack, this state is
known as an underflow state.

o isEmpty(): It determines whether the stack is empty or not.

o isFull(): It determines whether the stack is full or not.

o peek(): It returns the element at the top of the stack without removing it.

o count(): It returns the total number of elements available in a stack.

o change(): It changes the element at the given position.

o display(): It prints all the elements available in the stack.

PUSH operation
The steps involved in the PUSH operation are given below:

o Before inserting an element in a stack, we check whether the stack is full.

o If we try to insert the element in a stack, and the stack is full, then
the overflow condition occurs.

o When we initialize a stack, we set the value of top as -1 to check that the stack is
empty.

o When the new element is pushed in a stack, first, the value of the top gets
incremented, i.e., top=top+1, and the element will be placed at the new
position of the top.

o The elements will be inserted until we reach the max size of the stack.
POP operation
The steps involved in the POP operation are given below (a small C sketch of both PUSH and
POP follows these steps):

o Before deleting an element from the stack, we check whether the stack is empty.

o If we try to delete an element from an empty stack, the underflow condition occurs.

o If the stack is not empty, we first access the element pointed to by the top.

o Once the pop operation is performed, the top is decremented by 1, i.e., top = top - 1.
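
As referenced above, here is a minimal array-based sketch of PUSH and POP in C (MAX, the global variables, and the 0/1 return convention are our own illustrative assumptions):

#define MAX 5                      /* capacity of the stack, as in the example */

int stack[MAX];
int top = -1;                      /* top = -1 means the stack is empty */

/* PUSH: fails with overflow when the stack is full. */
int push(int value) {
    if (top == MAX - 1) return 0;  /* overflow: no room for a new element */
    stack[++top] = value;          /* increment top, then place the element */
    return 1;
}

/* POP: fails with underflow when the stack is empty. */
int pop(int *value) {
    if (top == -1) return 0;       /* underflow: nothing to delete */
    *value = stack[top--];         /* read the top element, then decrement */
    return 1;
}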
Applications of Stack
The following are the applications of the stack:

o Balancing of symbols: Stack is used for balancing symbols. As we know, each program has
opening and closing braces; when an opening brace comes, we push it onto a stack, and when a
closing brace appears, we pop the matching opening brace from the stack, so the net count comes
out to be zero. If any symbol is left in the stack, it means there is a syntax error in the
program. (A small C sketch of this check appears after this list.)

o String reversal: Stack is also used for reversing a string. For example, suppose we want
to reverse the string "VNR VJIET"; we can achieve this with the help of a stack.
First, we push all the characters of the string onto a stack until we reach the null
character. After pushing all the characters, we pop them one by one until we reach the
bottom of the stack, which yields the characters in reverse order.

o Recursion: Recursion means that a function calls itself. To maintain the previous states,
the compiler creates a system stack in which all the previous records of the function calls
are maintained.
o DFS (Depth First Search): This search is performed on a graph, and its implementation uses
the stack data structure.

o Backtracking: Suppose we have to find a path to solve a maze problem. If we are moving along
a particular path and realize that we have taken a wrong way, then in order to return to the
beginning of the path and create a new one, we have to use the stack data structure.

o Expression conversion: Stack can also be used for expression conversion. This is one of the
most important applications of the stack. The list of expression conversions is given below:

Infix to postfix
Infix to prefix
Postfix evaluation

o Memory management: The stack manages memory. Memory is assigned in contiguous memory blocks.
This memory is known as stack memory, as all the variables of a function call are assigned in
it. The memory size assigned to the program is known to the compiler. When a function is
invoked, its variables are assigned in the stack memory; when the function completes its
execution, all the variables assigned in the stack are released.
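
As promised under "Balancing of symbols", a small C sketch of the idea (the function name and the fixed stack size are our own choices; it checks round brackets only, and braces or square brackets would follow the same pattern):

/* Return 1 if every '(' in expr is matched by a later ')', else 0. */
int balanced(const char *expr) {
    char stack[256];
    int top = -1;
    for (int i = 0; expr[i] != '\0'; i++) {
        if (expr[i] == '(') {
            if (top == 255) return 0;  /* guard against stack overflow */
            stack[++top] = '(';        /* push each opening bracket */
        } else if (expr[i] == ')') {
            if (top == -1) return 0;   /* closing bracket with no opener */
            top--;                     /* pop the matching opener */
        }
    }
    return top == -1;                  /* an empty stack means balanced */
}

For example, balanced("(a+(b*c))") returns 1, while balanced("(a+b))") returns 0.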

Infix to Postfix expression

Infix expression
An infix expression is an expression in which operators (+, -, *, /) are written between
the two operands. For example, consider the following expressions:

1. A + B
2. A + B - C
3. (A + B) + (C - D)

Here we have written the '+' operator between the operands A and B, and the '-' operator
between the operands C and D.
Postfix Expression
A postfix expression also contains operators and operands. In a postfix expression, the
operator is written after its operands. It is also known as Reverse Polish Notation. For
example, consider the following expressions:

1. A B +
2. A B + C -
3. A B C * +
4. A B + C * D -
Algorithm to Convert Infix to Postfix Expression Using Stack
Following is the algorithm to convert infix expression into Reverse Polish notation.

1. Initialize the stack.

2. Scan the infix expression from left to right.

3. If the scanned character is an operand, append it to the postfix string.

4. If the scanned character is an operator and the stack is empty or its top is a '(',
push the operator onto the stack.

5. If the scanned operator has higher precedence than the operator on top of the stack,
push it onto the stack.

6. If the scanned operator has lower precedence than the operator on top of the stack,
pop the stack operators and append them to the output; after that, push the scanned
operator onto the stack.

7. If the scanned character is a left bracket '(', push it onto the stack.

8. If the scanned character is a right bracket ')', pop the stack and append operators to
the output string until '(' is encountered, and discard both brackets.

9. Repeat steps 2 to 8 until the whole infix expression is scanned.

10. Print the output.

11. Pop and output all remaining operators from the stack until it is empty.

Let's translate an infix expression into a postfix expression using the stack. Here, we have
the infix expression ((A * (B + D) / E) - (F * (G + H / K))) to convert into its equivalent
postfix expression:

Label No.   Symbol Scanned   Stack     Expression

1           (                (
2           (                ((
3           A                ((        A
4           *                ((*       A
5           (                ((*(      A
6           B                ((*(      AB
7           +                ((*(+     AB
8           D                ((*(+     ABD
9           )                ((*       ABD+
10          /                ((*/      ABD+
11          E                ((*/      ABD+E
12          )                (         ABD+E/*
13          -                (-        ABD+E/*
14          (                (-(       ABD+E/*
15          F                (-(       ABD+E/*F
16          *                (-(*      ABD+E/*F
17          (                (-(*(     ABD+E/*F
18          G                (-(*(     ABD+E/*FG
19          +                (-(*(+    ABD+E/*FG
20          H                (-(*(+    ABD+E/*FGH
21          /                (-(*(+/   ABD+E/*FGH
22          K                (-(*(+/   ABD+E/*FGHK
23          )                (-(*      ABD+E/*FGHK/+
24          )                (-        ABD+E/*FGHK/+*
25          )                (empty)   ABD+E/*FGHK/+*-
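
A compact C sketch of this conversion (our own illustrative code) reproduces the trace above. Note that, like the steps and the table, it pops only operators of strictly higher precedence, so equal-precedence operators stack up, which is exactly what row 10 of the table shows:

#include <ctype.h>
#include <stdio.h>

/* Operator precedence used by this sketch ('(' gets 0, so it stays put). */
static int prec(char op) {
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return 0;
}

/* Convert an infix expression with single-letter operands to postfix. */
void infix_to_postfix(const char *in, char *out) {
    char stack[100];
    int top = -1, j = 0;
    for (int i = 0; in[i] != '\0'; i++) {
        char c = in[i];
        if (isalnum((unsigned char)c)) {
            out[j++] = c;                    /* operands go straight to output */
        } else if (c == '(') {
            stack[++top] = c;                /* push the left bracket */
        } else if (c == ')') {
            while (top >= 0 && stack[top] != '(')
                out[j++] = stack[top--];     /* pop down to the '(' */
            top--;                           /* discard the '(' itself */
        } else if (prec(c) > 0) {
            while (top >= 0 && prec(stack[top]) > prec(c))
                out[j++] = stack[top--];     /* pop higher-precedence operators */
            stack[++top] = c;                /* then push the scanned operator */
        }
    }
    while (top >= 0)
        out[j++] = stack[top--];             /* flush the remaining operators */
    out[j] = '\0';
}

int main(void) {
    char out[100];
    infix_to_postfix("((A*(B+D)/E)-(F*(G+H/K)))", out);
    printf("%s\n", out);                     // prints ABD+E/*FGHK/+*-
    return 0;
}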

Postfix notation is used to represent algebraic expressions. Expressions written in postfix
form are evaluated faster than in infix notation, as parentheses are not required. Here, the
evaluation of postfix expressions is discussed.

Following is the algorithm for evaluating postfix expressions.

1) Create a stack to store operands (or values).

2) Scan the given expression and do the following for every scanned element.

…..a) If the element is a number, push it onto the stack.

…..b) If the element is an operator, pop the operands for the operator from the stack (the
first value popped is the second operand). Apply the operator and push the result back onto
the stack.

3) When the expression has ended, the number left in the stack is the final answer.

Example:

Let the given expression be “2 3 1 * + 9 -“. We scan all elements one by one.

1) Scan ‘2’, it’s a number, so push it to stack. Stack contains ‘2’

2) Scan ‘3’, again a number, push it to stack, stack now contains ‘2 3’ (from bottom to top)

3) Scan ‘1’, again a number, push it to stack, stack now contains ‘2 3 1’

4) Scan ‘*’, it’s an operator, pop two operands from stack, apply the * operator on operands,
we get 3*1 which results in 3. We push the result ‘3’ to stack. Stack now becomes ‘2 3’.

5) Scan ‘+’, it’s an operator, pop two operands from stack, apply the + operator on operands,
we get 3 + 2 which results in 5. We push the result ‘5’ to stack. Stack now becomes ‘5’.

6) Scan ‘9’, it’s a number, we push it to the stack. Stack now becomes ‘5 9’.

7) Scan ‘-‘, it’s an operator, pop two operands from stack, apply the – operator on operands,
we get 5 – 9 which results in -4. We push the result ‘-4’ to stack. Stack now becomes ‘-4’.

8) There are no more elements to scan, we return the top element from stack (which is the only
element left in stack).
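
A C sketch of this evaluation for single-digit operands (illustrative code with no error handling for malformed expressions; spaces are skipped so the example string from the text can be passed as-is):

#include <stdio.h>

/* Evaluate a postfix expression of single-digit numbers, e.g. "2 3 1 * + 9 -". */
int eval_postfix(const char *expr) {
    int stack[100];
    int top = -1;
    for (int i = 0; expr[i] != '\0'; i++) {
        char c = expr[i];
        if (c == ' ') continue;               /* skip separators */
        if (c >= '0' && c <= '9') {
            stack[++top] = c - '0';           /* operand: push its value */
        } else {
            int b = stack[top--];             /* second operand is popped first */
            int a = stack[top--];
            switch (c) {
            case '+': stack[++top] = a + b; break;
            case '-': stack[++top] = a - b; break;
            case '*': stack[++top] = a * b; break;
            case '/': stack[++top] = a / b; break;
            }
        }
    }
    return stack[top];                        /* the final answer */
}

int main(void) {
    printf("%d\n", eval_postfix("2 3 1 * + 9 -"));   /* prints -4 */
    return 0;
}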
Tower of Hanoi is a mathematical puzzle where we have three rods and n
disks. The objective of the puzzle is to move the entire stack to another rod,
obeying the following simple rules:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and
placing it on top of another stack i.e. a disk can only be moved if it is the
uppermost disk on a stack.
3. No disk may be placed on top of a smaller disk.
Approach :

Take an example for 2 disks :


Let rod 1 = 'A', rod 2 = 'B', rod 3 = 'C'.

Step 1 : Shift first disk from 'A' to 'B'.


Step 2 : Shift second disk from 'A' to 'C'.
Step 3 : Shift first disk from 'B' to 'C'.

The pattern here is :


Shift 'n-1' disks from 'A' to 'B'.
Shift last disk from 'A' to 'C'.
Shift 'n-1' disks from 'B' to 'C'.



Examples:

Input : 2
Output : Disk 1 moved from A to B
Disk 2 moved from A to C
Disk 1 moved from B to C

Input : 3
Output : Disk 1 moved from A to C
Disk 2 moved from A to B
Disk 1 moved from C to B
Disk 3 moved from A to C
Disk 1 moved from B to A
Disk 2 moved from B to C
Disk 1 moved from A to C
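
The pattern above translates naturally into a recursive C function (a sketch; the rod names are passed as characters), which reproduces the move sequences shown in the examples:

#include <stdio.h>

/* Move n disks from rod 'from' to rod 'to', using rod 'aux' as a helper. */
void hanoi(int n, char from, char to, char aux) {
    if (n == 0) return;              /* no disks left to move */
    hanoi(n - 1, from, aux, to);     /* shift n-1 disks out of the way */
    printf("Disk %d moved from %c to %c\n", n, from, to);
    hanoi(n - 1, aux, to, from);     /* shift them onto the moved disk */
}

int main(void) {
    hanoi(3, 'A', 'C', 'B');         /* reproduces the 3-disk output above */
    return 0;
}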

All programs done in the lab and the recursion concept are to be read.
