Data Structures - Abstract Data Types

An algorithm is a set of steps to solve a problem that is finite, unambiguous, and efficient. An algorithm's efficiency is measured by its time and space complexity, which analyze how the resources required grow with the size of the input. Time complexity measures the number of steps, while space complexity measures the memory usage. Common complexities include constant, linear, quadratic, and logarithmic. The best, worst, and average cases analyze an algorithm's performance under different conditions. Asymptotic notations like Big-O describe the long-term growth rate of an algorithm.

Uploaded by

Radha Sundar

What is an Algorithm?

An algorithm is a finite set of instructions or logic, written in order, to accomplish a certain predefined task. An algorithm is not the complete code or program; it is just the core logic (solution) of a problem, which can be expressed either as an informal high-level description, as pseudocode, or using a flowchart.

Every Algorithm must satisfy the following properties:

1. Input- There should be zero or more inputs supplied externally to the algorithm.
2. Output- There should be at least one output obtained.
3. Definiteness- Every step of the algorithm should be clear and well defined.
4. Finiteness- The algorithm should have a finite number of steps.
5. Correctness- Every step of the algorithm must generate a correct output.
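These properties can be made concrete with a small sketch (the function below and its name are illustrative, not from the text): finding the largest of three numbers.

```cpp
#include <algorithm>

// Input: three integers supplied externally (property 1).
// Output: exactly one integer (property 2).
// Each step is well defined (3), the algorithm terminates (4),
// and it produces the correct result for every input (5).
int largestOfThree(int a, int b, int c) {
    return std::max(a, std::max(b, c));
}
```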

Algorithm Analysis

Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation.

A priori analysis- This is theoretical analysis. Efficiency of the algorithm is measured by assuming that all other factors, e.g. processor speed, are constant and have no effect on the implementation.

A posteriori analysis- This is empirical analysis. The algorithm is implemented and run on a target machine, and actual statistics such as running time and space required are collected.

An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space. The performance of an algorithm is measured on the basis of the following properties:

1. Time Complexity
2. Space Complexity

Space Complexity of Algorithms

Whenever a solution to a problem is written, some memory is required to complete it. For any algorithm, memory may be used for the following:

1. Variables (this includes constant values and temporary values)
2. Program instructions
3. Execution

Space complexity is the amount of memory used by the algorithm (including the input values to the
algorithm) to execute and produce the result.

When we design an algorithm to solve a problem, it needs some computer memory to complete its
execution. For any algorithm, memory is required for the following purposes...
1. Memory required to store program instructions

2. Memory required to store constant values

3. Memory required to store variable values

4. And for a few other things

Sometimes Auxiliary Space is confused with Space Complexity. But Auxiliary Space is the extra or temporary space used by the algorithm during its execution.

Space Complexity = Auxiliary Space + Input space
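The distinction can be sketched in code (function names are illustrative): reversing an array in place needs only O(1) auxiliary space, while reversing into a copy needs O(n) auxiliary space; both have O(n) total space complexity once the input itself is counted.

```cpp
#include <algorithm>
#include <vector>

// In-place reversal: the input occupies O(n) space, but the
// auxiliary (extra) space is O(1) -- just a couple of indices.
void reverseInPlace(std::vector<int>& v) {
    std::size_t i = 0, j = v.size();
    while (i + 1 < j) {
        std::swap(v[i], v[j - 1]);
        ++i; --j;
    }
}

// Reversal into a new vector: the copy itself is O(n) auxiliary space.
std::vector<int> reversedCopy(const std::vector<int>& v) {
    return std::vector<int>(v.rbegin(), v.rend());
}
```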

Memory Usage while Execution

While executing, an algorithm uses memory space for three reasons:

1. Instruction Space
It's the amount of memory used to save the compiled version of instructions.
2. Environmental Stack
Sometimes an algorithm(function) may be called inside another algorithm(function).
In such a situation, the current variables are pushed onto the system stack, where they
wait for further execution and then the call to the inside algorithm(function) is made.

For example, If a function A() calls function B() inside it, then all the variables of the
function A() will get stored on the system stack temporarily, while the function B() is
called and executed inside the function A().

3. Data Space
Amount of space used by the variables and constants.

But while calculating the Space Complexity of any algorithm, we usually consider only
Data Space and we neglect the Instruction Space and Environmental Stack.

Calculating the Space Complexity

For calculating the space complexity, we need to know the amount of memory used by different types of variables, which generally varies between operating systems, but the method for calculating the space complexity remains the same.

Type                                                     Size

bool, char, unsigned char, signed char, __int8           1 byte
__int16, short, unsigned short, wchar_t, __wchar_t       2 bytes
float, __int32, int, unsigned int, long, unsigned long   4 bytes
double, __int64, long double, long long                  8 bytes

Now let's learn how to compute space complexity by taking a few examples:

int sum(int a, int b, int c)
{
int z = a + b + c;
return(z);
}
Example 1

In the above example 1, the variables a, b, c and z are all integer types, hence each takes up 4 bytes, so the total memory requirement will be (4(4) + 4) = 20 bytes; the additional 4 bytes are for the return value. And because this space requirement is fixed for the above example, it is called Constant Space Complexity.

Linear Space and Time Complexity

Let's take another, slightly more complex example, example 2:

// n is the length of array a[]


int sum(int a[], int n)
{
int x = 0; // 4 bytes for x
for(int i = 0; i < n; i++) // 4 bytes for i
{
x = x + a[i];
}
return(x);
}
Example 2

 In the above code, 4*n bytes of space is required for the array a[] elements.
 4 bytes each for x, n, i and the return value.

Hence the total memory requirement will be (4n + 12) bytes, which increases linearly with the input value n; hence it is called Linear Space Complexity.

The running time of the for loop in example 2 is directly proportional to 'n': when 'n' doubles, so does the running time, so its time complexity is linear as well. (Two nested loops, by contrast, run in time proportional to N*N, as shown later in example 7.)

Similarly, we can have quadratic and other more complex space complexities as the complexity of an algorithm increases.

But we should always focus on writing algorithm code in such a way that we keep the space
complexity minimum.
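For instance, an algorithm that builds an n-by-n table stores n*n values, so its memory use grows quadratically with n (the multiplication-table function below is a hypothetical illustration):

```cpp
#include <vector>

// Builds an n x n multiplication table: n*n ints are stored,
// so memory use grows as the square of n -- quadratic space.
std::vector<std::vector<int>> multiplicationTable(int n) {
    std::vector<std::vector<int>> t(n, std::vector<int>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            t[i][j] = (i + 1) * (j + 1);
    return t;
}
```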

Consider another example, example 3:

algorithm abc(a, b, c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0;
}
Example 3

In the above case, considering that a, b and c are integers and the return value is a double, we can calculate that a, b and c take 4 bytes each and the return value takes 8 bytes. So we can say it takes approximately (12 + 8) = 20 bytes.

Consider another example, example 4:

Algorithm Sum(a, n)
{
s = 0.0;
for i = 1 to n do
s = s + a[i];
return s;
}
Example 4

In the above algorithm, n elements are summed. The space needed by n is 4 bytes. Since the data stored in a[] is floating point, each element takes 8 bytes, so the space needed by a[] is n*8 bytes.
Time Complexity

The time T(P) taken by a program P is the sum of the compile time and the run (or execution) time.

Consider example 5:

Linear Time Complexity
for(int i=1;i<=n;i++)
{
sum=i*n;
}
cout<<sum;

Example 5

In the above code, the loop executes n times to find the square of the given value 'n'.
Consider example 6, where it can be written as:

return n*n;
Example 6

In the above, the square of the number is found in just one single execution. In example 5 the loop execution depends upon the value of 'n': if the value is large, the number of times the loop executes also increases, thus increasing the execution time. Whereas in example 6, whatever the value of 'n', there is only a single execution. The time complexity of example 5 is therefore Linear: the running time of the loop is directly proportional to 'n'.
Quadratic Time Complexity
Consider example 7
for(i=0;i<N;i++)
{
for(j=0;j<N;j++)
{

Statement;
}
}
Example 7

This time, the time complexity for the above code will be Quadratic. The running time of the two nested loops is proportional to the square of N: when N doubles, the running time increases fourfold.
Logarithmic Time Complexity

Consider example 8
while(low <= high)
{
mid = (low + high) / 2;
if (target < list[mid])
high = mid - 1;
else if (target > list[mid])
low = mid + 1;
else break;
}
Example 8

In the above algorithm, the search space is divided in half on each iteration, so it has logarithmic time complexity. The running time of the algorithm is proportional to the number of times N can be divided by 2 (here N is high - low), because the algorithm halves the working area with each iteration.

Types of Notations for Time Complexity


Now we will discuss and understand the various notations used for Time Complexity.

1. Big Oh denotes "fewer than or the same as" <expression> iterations.


2. Big Omega denotes "more than or the same as" <expression> iterations.
3. Big Theta denotes "the same as" <expression> iterations.
4. Little Oh denotes "fewer than" <expression> iterations.
5. Little Omega denotes "more than" <expression> iterations.

Best, Worst and Average-case Complexity


The worst-case complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n. It is the slowest time to complete for a given input.
The best-case complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n. It is the fastest time to complete, with optimal inputs chosen. For example, in a simple linear search on a list, the best case occurs when the element to be searched for is the first element of the list.
Finally, the average-case complexity of the algorithm is the function defined by the average number of steps taken over all instances of size n. This is simply the arithmetic mean.
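Linear search makes the three cases concrete (a sketch; the step counter is added for illustration): the best case is 1 comparison when the target comes first, and the worst case is n comparisons when it is last or absent.

```cpp
#include <vector>

// Linear search that also reports how many comparisons were made.
// Best case: target at index 0 (1 step). Worst case: absent (n steps).
int linearSearch(const std::vector<int>& a, int target, int& steps) {
    steps = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        ++steps;
        if (a[i] == target)
            return static_cast<int>(i);
    }
    return -1;
}
```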

Rate of Growth: Big O notation


Suppose M is an algorithm, and suppose n is the size of the input data. The rate of growth of the
algorithm can be expressed as in the table 1

n      log n   n      n log n   n^2     n^3     2^n
5      3       5      15        25      125     32
10     4       10     40        100     10^3    10^3
100    7       100    700       10^4    10^6    10^30
Asymptotic Notations
Following are commonly used asymptotic notations used in calculating running time
complexity of an algorithm.

 Ο Notation
 Ω Notation
 θ Notation

Big 'O' Notation

The Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or longest amount of time an algorithm can possibly
take to complete.

Omega(Ω) Notation
The Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity, or the shortest amount of time an algorithm can possibly take to complete.

Theta(θ) Notation
The θ(n) is the formal way to express both the lower bound and upper bound of an algorithm's
running time.

Characteristic    Description

Linear:           In linear data structures, the data items are arranged in a linear sequence. Example: Array
Non-Linear:       In non-linear data structures, the data items are not in sequence. Example: Tree, Graph
Homogeneous:      In homogeneous data structures, all the elements are of the same type. Example: Array
Non-Homogeneous:  In non-homogeneous data structures, the elements may or may not be of the same type. Example: Structures
Static:           Static data structures are those whose sizes and associated memory locations are fixed at compile time. Example: Array
Dynamic:          Dynamic data structures are those which expand or shrink depending upon the program's needs during execution; their associated memory locations also change. Example: Linked list created using pointers
Data structures are broadly classified as primitive data structures and composite data structures, described below.

Data structures

Definition: A data structure is a representation of the logical relationships existing between individual elements of data. It can be stated that a data structure is a way of organizing all data items that considers not only the elements stored but also their relationship to each other.

Primitive data types

Primitive data structures are the basic data structures that operate directly upon machine instructions. They are the built-in, basic data types of programming languages. Examples: integers, floating point numbers, characters, strings, pointers, booleans.
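In C++, for example, such primitive types are declared directly (a small sketch; the variable names are illustrative):

```cpp
// Primitive (built-in) types map directly onto machine representations.
int count = 42;        // integer
double ratio = 3.14;   // floating point number
char grade = 'A';      // character
bool done = false;     // boolean
int* ptr = &count;     // pointer to an integer
```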

Non-primitive or Composite data structures:

A composite data type is any data type that is composed of primitive data types as its base types. In other words, it is an aggregate of primitive types. These are user defined and are derived from the basic data types. Examples: arrays, structures, classes, records, lists, files, etc.
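A structure is a typical composite type: it aggregates primitive (and other composite) types into one user-defined type (the Student type below is a hypothetical example):

```cpp
#include <string>

// A composite (non-primitive) type built from other types.
struct Student {
    int id;            // primitive: integer
    std::string name;  // composite: string
    double marks[3];   // composite: array of floating point numbers
};
```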
Arrays:

An array is defined as a finite set of homogeneous elements or data items. This means an array can contain only one type of data: all integers, all floating point numbers, or all characters.
Example: int a[10]

Array declaration:

int a[10];

Here int is the data type, a is the variable name, and 10 is the size of the array.

Concepts of array:

 The individual elements of an array can be accessed by specifying the name of the array, followed by the index or subscript inside square brackets.
 An array index runs from 0 to n-1. For example, the elements of an array x[n] containing n elements are denoted by x[0], x[1], … x[n-1], where 0 is the lower bound and n-1 is the upper bound of the array. So the 10th element is accessed as x[9], since the first element is x[0].
 The elements of an array are always stored in consecutive memory locations.
 Arrays can always be read or written through a loop. A one-dimensional array requires one loop for reading and one loop for writing, while a two-dimensional array requires two (nested) loops for each operation.
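These points can be sketched in code (the helper names are illustrative): valid indices run from 0 to n-1, and a single loop reads a one-dimensional array.

```cpp
// Returns the element at 0-based index i of an array of length n,
// or -1 for an out-of-range index (an illustrative convention).
int elementAt(const int a[], int n, int i) {
    if (i < 0 || i >= n) return -1;  // valid indices are 0 .. n-1
    return a[i];
}

// Reading a one-dimensional array needs a single loop over 0 .. n-1.
int sumArray(const int a[], int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
```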
Operations performed on array
 Creation of an array
 Traversing of an array
 Insertion of new elements
 Deletion of required elements
 Modification of an element
 Merging of arrays
 Search for an element
 Reversing the elements of an array
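A few of these operations can be sketched with std::vector, whose insert and erase shift the remaining elements just as an array implementation would (the function names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Insert value at 0-based position pos, shifting later elements right.
void insertAt(std::vector<int>& a, std::size_t pos, int value) {
    a.insert(a.begin() + pos, value);
}

// Delete the element at position pos, shifting later elements left.
void deleteAt(std::vector<int>& a, std::size_t pos) {
    a.erase(a.begin() + pos);
}

// Reverse the elements of the array.
void reverseArray(std::vector<int>& a) {
    std::reverse(a.begin(), a.end());
}
```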

Creation of an array: after creating and initializing an array, e.g. int a[6] = {3, 5, 9, 15, 18, 20}, it looks like this:

Index:     a[0]  a[1]  a[2]  a[3]  a[4]  a[5]
Element:    3     5     9    15    18    20

Elements in an array with their indices
