Data Structure Module 1
MODULE 1
Algorithm
An algorithm is a finite, step-by-step sequence of instructions to solve a
well-defined computational problem.
That is, in practice, to solve any complex real-life problem, we first have
to define the problem; the second step is to design an algorithm to solve
that problem.
Writing and executing programs and then optimizing them may be
effective for small programs. Optimization of a program is directly concerned
with algorithm design. But for a large program, each part of the program must
be well organized before writing the program. A few steps of refinement are
involved when a problem is converted to a program; this method is called the
stepwise refinement method. There are two approaches to algorithm design:
top-down and bottom-up.
Programming style
The following sections discuss different programming methodologies used to
design a program.
1. Procedural
2. Modular
3. Structured
4. Object oriented
1. Procedural Programming
In procedural programming, the program is organized as a sequence of
procedures (routines), each containing a series of instructions that are
executed one after another.
2. Modular Programming
The program is progressively decomposed into smaller partitions called
modules. The program can easily be written in modular form, thus allowing an
overall problem to be decomposed into a sequence of individual subprograms.
Thus we can consider a module to be decomposed into successively subordinate
modules. Conversely, a number of modules can be combined to form a
superior module.
A sub-module is located elsewhere in the program, and the superior
module, whenever necessary, makes a reference to the subordinate module and
calls for its execution. This activity on the part of the superior module is
known as calling; this module is referred to as the calling module, and the
sub-module as the called module. The sub-modules may be subprograms such as
functions or procedures.
The following are the steps needed to develop a modular program
1. Define the problem
2. Outline the steps needed to achieve the goal
3. Decompose the problem into subtasks
4. Prototype a subprogram for each sub task
5. Repeat steps 3 and 4 for each subprogram until further decomposition
seems counterproductive
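As an illustrative sketch of these steps (the task and function names are hypothetical, not from the text), a small problem such as averaging a set of marks can be decomposed into subordinate modules:

```c
#include <stddef.h>

/* Subordinate (called) module: sum the elements of an array */
int sum_marks(const int *marks, size_t n) {
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += marks[i];
    return total;
}

/* Superior (calling) module: compute the average using the sub-module */
double average_marks(const int *marks, size_t n) {
    if (n == 0)
        return 0.0;  /* guard against division by zero */
    return (double)sum_marks(marks, n) / (double)n;
}
```

Here average_marks is the calling module and sum_marks the called module; each can be written, tested, and retested in isolation, which is the point of the decomposition.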
Two methods may be used for modular programming: top-down and
bottom-up. Regardless of which method is used, the end result is a modular
program. This end result is important, because not all errors may be detected
at the time of the initial testing, and it is possible that there are still
bugs in the program. If an error is discovered after the program has
supposedly been fully tested, then the modules concerned can be isolated and
retested by themselves. Regardless of the design method used, if a program has
been written in modular form, it is easier to detect the source of the error
and to test it in isolation than if the program were written as one function.
Advantages of modular programming
1. Reduces the complexity of the entire problem
2. Avoids duplication of code
3. Debugging the program is easier and more reliable
4. Improves performance
5. A modular program hides the use of data structures
6. Global data is also hidden within modules
7. Reusability: modules can be used in other programs without rewriting
and retesting
8. Modular programs improve the portability of a program
9. It reduces development work
3. Structured Programming
It is a programming style, and this style of programming is known by several
names: procedural decomposition, structured programming, etc. Structured
programming is not programming with structures, but programming using only
the following types of code structures:
1. Sequences of sequentially executed statements
2. Conditional execution of statements (i.e., "if" statements)
3. Looping or iteration (i.e., "for", "do...while", and "while" statements)
4. Structured subroutine calls (i.e., functions)
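A minimal sketch in C showing all four structures at once (the task and names are illustrative):

```c
#include <stddef.h>

/* Subroutine: returns 1 if x is even (target of a structured call) */
static int is_even(int x) {
    return x % 2 == 0;
}

/* Uses all four structures: sequence, iteration, condition, subroutine call */
int count_even(const int *a, size_t n) {
    int count = 0;                      /* 1. sequence */
    for (size_t i = 0; i < n; i++) {    /* 3. looping / iteration */
        if (is_even(a[i]))              /* 2. conditional + 4. subroutine call */
            count++;
    }
    return count;
}
```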
Analysis of Algorithm
After designing an algorithm, it has to be checked and its correctness
needs to be verified; this is done by analyzing the algorithm. The algorithm
can be analyzed by tracing all step-by-step instructions, reading the
algorithm for logical correctness, and testing it on some data, using
mathematical techniques to prove it correct. Another type of analysis is to
analyze the simplicity of the algorithm: that is, to design the algorithm in a
simple way so that it becomes easier to implement. However, the simplest and
most straightforward way of solving a problem may not always be the best one.
Moreover, there may be more than one algorithm to solve a problem. The choice
of a particular algorithm depends on the following performance analysis and
measurements:
1. Space complexity
2. Time complexity
Space Complexity
The space complexity of an algorithm or program is the amount of
memory it needs to run to completion. One reason for studying space
complexity is that it can be used to estimate the size of the largest problem
that a program can solve.
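As a sketch of the idea (the functions are our own illustration, not from the text), compare a computation that needs only a fixed amount of extra memory with one whose extra memory grows with the input size:

```c
#include <stdlib.h>

/* O(1) extra space: a fixed number of variables, regardless of n */
long sum_iterative(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* O(n) extra space: allocates an auxiliary array proportional to the input */
long *prefix_sums(const int *a, size_t n) {
    long *p = malloc(n * sizeof *p);   /* n extra cells of memory */
    if (p == NULL)
        return NULL;
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        s += a[i];
        p[i] = s;                      /* running total up to index i */
    }
    return p;
}
```

The second function can solve a smaller "largest problem" than the first on the same machine, precisely because its memory requirement grows with n.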
Time Complexity
Here, a more sophisticated method is to identify the key operations and
count how many such operations are performed until the program completes its
execution. A key operation in our algorithm is an operation that takes the
maximum time among all possible operations in the algorithm. Such an abstract,
theoretical approach is not only useful for discussing and comparing
algorithms, but is also useful for improving solutions to practical problems.
The time complexity can then be expressed as a function of the number of key
operations performed. Before we go ahead with our discussion, it is important
to understand the rate-of-growth analysis of an algorithm, as shown in the
figure.
Functions that involve n as an exponent, i.e., 2^n, n^n, n!, are called
exponential functions; algorithms with such growth are too slow except for
small input sizes. Functions whose growth is less than or equal to n^c (where
c is a constant), i.e., n^3, n^2, n log2 n, n, log2 n, are said to be
polynomial. Algorithms with polynomial time can solve reasonably sized
problems if the constant in the exponent is small.
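As a rough numerical sketch of this contrast (the helper names are illustrative), even a modest exponential quickly overtakes a polynomial:

```c
/* Compare growth of a polynomial, n^2, and an exponential, 2^n */
long poly(long n) {
    return n * n;          /* n^2 */
}

long expo(long n) {
    return 1L << n;        /* 2^n, valid only for small n to avoid overflow */
}
```

For n = 10 the polynomial gives 100 while the exponential already gives 1024, and the gap widens rapidly as n grows.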
The running time of an algorithm depends on the input data; when we analyze
an algorithm, there are three cases:
1. Best case
2. Average case
3. Worst case
In the best case, we measure the amount of time a program takes on the
best possible input data.
In the average case, we measure the amount of time a program might be
expected to take on typical (or average) input data.
In the worst case, we measure the amount of time a program takes on the
worst possible input configuration.
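These cases can be illustrated with a linear search; the sketch below (our own example, with an extra comparison counter) makes the best and worst cases visible:

```c
#include <stddef.h>

/* Linear search that also reports the number of key comparisons made,
   so the best/worst case behaviour can be observed directly. */
long linear_search(const int *a, size_t n, int key, size_t *comparisons) {
    *comparisons = 0;
    for (size_t i = 0; i < n; i++) {
        (*comparisons)++;              /* one comparison per loop iteration */
        if (a[i] == key)
            return (long)i;            /* found: return its index */
    }
    return -1;                         /* not found */
}
```

In the best case the key is the first element (1 comparison); in the worst case the key is absent (n comparisons); on average, for a key present at a random position, about n/2 comparisons are made.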
Frequency Count
The frequency count method can be used to analyze a program. Here we assume
that every statement takes the same constant amount of time for its execution.
Hence, determining the time complexity of a given program is just a matter of
summing the frequency counts of all the statements of that program.
Consider the following examples.
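The examples can be sketched in C as follows (a reconstruction along the usual lines):

```c
/* (a) x++ is not inside any loop: frequency count 1 */
int example_a(void) {
    int x = 0;
    x++;                                /* executed once */
    return x;
}

/* (b) x++ inside a single loop: frequency count n */
int example_b(int n) {
    int x = 0;
    for (int i = 0; i < n; i++)
        x++;                            /* executed n times */
    return x;
}

/* (c) x++ inside two nested loops: frequency count n^2 */
int example_c(int n) {
    int x = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            x++;                        /* executed n * n times */
    return x;
}
```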
In example (a) the statement x++ is not contained within any loop,
either explicit or implicit, so its frequency count is just one. In
example (b) the same statement is executed n times, and in example (c) it
is executed n^2 times. From these frequency counts we can analyze the program.
Growth of Functions and Asymptotic Notation
When we study algorithms, we are interested in characterizing them according
to their efficiency. We are usually interested in the order of growth of the
running time of an algorithm, not in the exact running time. This is also
referred to as the asymptotic running time. We need a way to talk about the
rate of growth of functions so that we can compare algorithms.
Asymptotic notation gives us a method for classifying functions according to
their rate of growth.
Big-O Notation
Definition:
f(n) = O(g(n)) iff there are two positive constants c and n0 such that
|f(n)| ≤ c|g(n)| for all n ≥ n0. If f(n) is nonnegative, we can simplify the
last condition to 0 ≤ f(n) ≤ c g(n) for all n ≥ n0. We then say that "f(n) is
big-O of g(n)." As n increases, f(n) grows no faster than g(n); in other
words, g(n) is an upper bound on f(n).
Example: n^2 + n = O(n^3)
Proof: Here, we have f(n) = n^2 + n and g(n) = n^3.
Notice that if n ≥ 1, then n ≤ n^3 and n^2 ≤ n^3, so n^2 + n ≤ 2n^3.
Hence the definition is satisfied with c = 2 and n0 = 1.
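The witnesses in such a proof can also be spot-checked numerically; the sketch below (our own helper) verifies that c = 2 and n0 = 1 work for this example over a range of n:

```c
/* Check the witnesses c = 2, n0 = 1 for the claim n^2 + n = O(n^3) */
int big_o_bound_holds(long n) {
    long f = n * n + n;        /* f(n) = n^2 + n */
    long g = n * n * n;        /* g(n) = n^3     */
    return f <= 2 * g;         /* f(n) <= c * g(n) with c = 2 */
}
```

Such a check is no substitute for the proof, but it is a quick sanity test that the chosen constants were not miscalculated.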
Big-Θ notation
Definition:
f(n) = Θ(g(n)) iff there are three positive constants c1, c2 and n0 such that
c1|g(n)| ≤ |f(n)| ≤ c2|g(n)| for all n ≥ n0. If f(n) is nonnegative, we can
simplify the last condition to 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0.
We then say that "f(n) is theta of g(n)." As n increases, f(n) grows at the
same rate as g(n).
Example: n^2 + 5n + 7 = Θ(n^2)
Proof:
When n ≥ 1, n^2 + 5n + 7 ≤ n^2 + 5n^2 + 7n^2 = 13n^2.
When n ≥ 0, n^2 ≤ n^2 + 5n + 7.
Thus, when n ≥ 1,
n^2 ≤ n^2 + 5n + 7 ≤ 13n^2.
Thus, we have shown that n^2 + 5n + 7 = Θ(n^2) (by the definition of
Big-Θ, with n0 = 1, c1 = 1, and c2 = 13).
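As with the Big-O example, these constants can be spot-checked numerically (the helper name is ours):

```c
/* Check c1 = 1, c2 = 13, n0 = 1 for the claim n^2 + 5n + 7 = Theta(n^2) */
int theta_bounds_hold(long n) {
    long f = n * n + 5 * n + 7;        /* f(n) = n^2 + 5n + 7 */
    long g = n * n;                    /* g(n) = n^2          */
    return g <= f && f <= 13 * g;      /* c1*g(n) <= f(n) <= c2*g(n) */
}
```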