DS 2
Organization of Data
• Accessing methods
• Degree of associativity
• Processing alternatives for information
Algorithm + Data Structure = Program
The study of data structures covers the following points:
• Amount of memory required to store the data.
• Amount of time required to process the data.
• Representation of data in memory.
• Operations performed on that data.
Data organization
• Organization of data refers to classifying and organizing data to
make it more meaningful and usable.
• The collection of data you work with in a program has some kind
of structure or organization. No matter how complex your data
structures are, they can be broken down into two fundamental types:
1. Contiguous
2. Non-Contiguous
• In contiguous structures, items of data are kept together in memory
(either in RAM or in a file). An array is an example of a contiguous
structure.
• In contrast, items in a non-contiguous structure are scattered in
memory but are linked to each other in some way. A linked list is
an example of a non-contiguous data structure. (A small sketch of
both follows.)
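A minimal C sketch of the two fundamental types (illustrative declarations, not from the source):

/* Contiguous: five integers stored in adjacent memory cells. */
int scores[5] = {10, 20, 30, 40, 50};

/* Non-contiguous: each node may live anywhere in memory and is
   linked to the next one through a pointer. */
struct node {
    int data;
    struct node *next; /* address of the next node */
};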
Classification of Data Structure
Primitive Data Structure
• Primitive data structures are basic structures and are directly operated
upon by machine instructions.
• Primitive data structures have different representations on different
computers.
• The storage structure of these data structures may vary from one machine
to another.
• Integers, floats, characters, and pointers are examples of primitive
data structures.
Non-Primitive Data Structures
1. Non-Primitive Data Structures are those data structures derived from
Primitive Data Structures.
2. These data structures can't be manipulated or operated directly by
machine-level instructions.
3. The focus of these data structures is on forming a set of data elements
that is either homogeneous (all of the same data type) or heterogeneous
(of different data types). Examples are arrays, files, strings, unions,
linked lists, stacks, queues, etc. We can divide these data structures
into two sub-categories -
• Linear Data Structures
• Non-Linear Data Structures
Linear data structures
• A data structure is said to be linear if and only if there is an
adjacency relationship between the elements, i.e., the elements form
a sequence (and may be stored in sequential memory locations).
• There are two ways to represent a linear data structure in
memory,
– Static memory allocation
– Dynamic memory allocation
• The possible operations on the linear data structure are:
Traversal, Insertion, Deletion, Searching, Sorting and
Merging.
• Examples of Linear data structure are arrays, linked list,
Stack and Queue etc.
Nonlinear data structures
• Nonlinear data structures are those data structures in which data
items are not arranged in a sequence.
• In a nonlinear data structure, insertion and deletion are not possible
in a linear fashion; elements are stored based on the hierarchical
relationship among the data.
• Examples of Non-linear Data Structure are Tree and Graph.
Tree: A tree can be defined as a finite set of data items (nodes) in which
data items are arranged in branches and sub-branches according
to requirement.
• Trees represent the hierarchical relationship between various
elements.
• A tree consists of nodes connected by edges; a node is represented by
a circle and an edge by a line.
Graph: Graph is a collection of nodes (Information) and connecting
edges (Logical relation) between nodes.
Operation on Data Structures
• Some of the common operations on Non-Primitive Data
Structure are:
1. Create: The create operation results in reserving memory for
program elements. This can be done by a declaration statement.
Creation of a data structure may take place either at
compile-time or at run-time.
2. Destroy / delete: The destroy operation releases the memory space
allocated for a specified data structure. In C, the free() function
is used to destroy a dynamically allocated data structure (see the
sketch after this list).
• Deletion is the process of removing an item from the structure.
3. Selection: Selection operation deals with accessing a
particular data within a data structure.
4. Updation: It updates or modifies the data in the data structure.
5. Traversal: Traversal is a process of visiting each and every
node of a list in a systematic manner.
6. Searching: It is the process of finding the location of the
element with a given key value in a particular data structure
or finding the location of an element, which satisfies the
given condition.
7. Sorting: It is the process of arranging the elements of a
particular data structure in some form of logical order. The
order may be either ascending or descending or alphabetic
order depending on the data items present.
8. Merging: Merging is a process of combining the data items
of two different sorted lists into a single sorted list.
9. Insertion: It is the process of adding a new element to the
structure. Most of the time, this operation is performed by
identifying the position where the new element is to be
inserted.
10. Splitting: Splitting is a process of partitioning a single list into
multiple lists.
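A minimal C sketch of a few of these operations on a dynamically created array (the layout and names here are illustrative, not from the source):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Create: reserve memory for 5 integers at run time. */
    int *list = malloc(5 * sizeof(int));
    if (list == NULL) return 1;

    /* Insertion / Updation: place values into the structure. */
    for (int i = 0; i < 5; i++)
        list[i] = i * 10;

    /* Traversal: visit each element in a systematic manner. */
    for (int i = 0; i < 5; i++)
        printf("%d ", list[i]);
    printf("\n");

    /* Destroy: release the memory using free(). */
    free(list);
    return 0;
}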
ALGORITHM
• A set of ordered steps or procedures necessary to solve a
problem.
• An algorithm is a sequence of unambiguous instructions for
solving a problem.
• An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task. In addition, all algorithms
must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: If we trace out the instructions of an algorithm, then
for all cases, the algorithm terminates after a finite number of steps.
5. Effectiveness: Every instruction must be very basic so that it can
be carried out, in principle, by a person using only pencil and paper.
It is not enough that each operation be definite as in criterion 3; it
must also be feasible.
Basic Statements Used and Examples
An algorithm always begins with the word ‘Start’ and ends with
the word ‘Stop’ (or ‘End’).
The step-wise solution is written as distinct steps, as shown in the
following example:
Start
Step 1:
Step 2:
.
.
.
Step n:
End
Input Statement: An algorithm takes one or more inputs to process.
The statements used to indicate input are Read a or Input b. For
example, let a, b be the names of the inputs:
Input a or Read a
Input b or Read b
where a and b are variable names.
Output Statements: An algorithm produces one or more outputs.
The statement used to show the output is Output b or Print b.
Syntax: Output variable-name
Print variable-name
For example, Output a or Print a
Output b or Print b
where a and b are variable names.
Assignment Statements:
Processing can be done using the assignment statement,
i.e., L.H.S = R.H.S
The L.H.S is a variable.
The R.H.S is a variable, a constant, or an expression.
• The value of the variable, constant or expression on the R.H.S is
assigned to the L.H.S.
The L.H.S and R.H.S should be of the same type. Here ‘=’ is
called the assignment operator.
For example, let the variables be x and y, and let z hold their
product. This can be represented as
Read x, y
z = x * y
The order in which the steps of an algorithm are executed is divided
into three types, namely
i) Sequential Order
ii) Conditional Order
iii) Iterative Order
Sequential Order
Each step is performed in a serial fashion, i.e., in a step-by-step
procedure. For example:
Write an algorithm to add two numbers.
Step 1 : Start
Step 2 : Read a
Step 3 : Read b
Step 4 : Add a , b
Step 5 : Store in d
Step 6 : Print d
Step 7 : End
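The same sequential algorithm written as a small C program (a sketch; the variable names follow the steps above):

#include <stdio.h>

int main(void)
{
    int a, b, d;
    scanf("%d", &a);   /* Step 2: Read a */
    scanf("%d", &b);   /* Step 3: Read b */
    d = a + b;         /* Steps 4-5: Add a, b and store in d */
    printf("%d\n", d); /* Step 6: Print d */
    return 0;
}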
Conditional Order
Based on fact that the given condition is met or not the algorithm
selects the next step to do. If statements are used when decision has
to be made. Different format of if statements are available they are
a) Syntax : if (condition)
Then {set of statements S1}
• Here, the condition is a Boolean expression which is either TRUE or
FALSE.
• If the condition is TRUE, the set of statements S1 is evaluated.
• If FALSE, S1 is not evaluated; the program skips that section.
• For example
Write an algorithm to check equality of numbers.
Step 1 : Start
Step 2 : Read a, b
Step 3 : if a == b, print numbers are equal to each other
Step 4 : End
b) if-else statement:
if (condition)
Then {set of statements S1}
else
{set of statements S2}
Here, if the condition evaluates to TRUE then S1 is executed,
otherwise the else statements S2 are executed. For example, write an
algorithm to print the grade.
Step 1 : Start
Step 2 : Read marks
Step 3 : if marks greater than 60 is TRUE
print ’GRADE A’
Step 4 : Otherwise
print ’GRADE B’
Step 5 : End
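The grade algorithm as a small C sketch (the 60-mark cutoff is taken from the steps above):

#include <stdio.h>

int main(void)
{
    int marks;
    scanf("%d", &marks);     /* Step 2: Read marks */
    if (marks > 60)          /* Step 3: condition is TRUE */
        printf("GRADE A\n");
    else                     /* Step 4: otherwise */
        printf("GRADE B\n");
    return 0;
}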
iii) Iterative Order
Here the algorithm repeats a finite set of steps over and over
until the condition is no longer met. An iterative operation is also
called a looping operation. For example:
Add natural numbers till the sum reaches 5.
Step 1 : Start
Step 2 : set count to 0 and i=1
Step 3 : add i to count
Step 4 : i=i+1
Step 5 : if count is less than 5, then repeat steps 3 & 4
Step 6 : otherwise print count
Step 7 : End
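The same iterative algorithm as a C sketch (a do-while mirrors the "repeat steps 3 & 4" structure):

#include <stdio.h>

int main(void)
{
    int count = 0, i = 1;  /* Step 2: set count to 0 and i = 1 */
    do {
        count = count + i; /* Step 3: add i to count */
        i = i + 1;         /* Step 4: i = i + 1      */
    } while (count < 5);   /* Step 5: repeat while count < 5 */
    printf("%d\n", count); /* Step 6: print count */
    return 0;
}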
The range of inputs for which an algorithm works has to be specified
carefully.
– The same algorithm can be represented in several different ways.
– Several algorithms for solving the same problem may exist.
– Algorithms for the same problem can be based on very different
ideas and can solve the problem with dramatically different speeds.
Every problem as we understand can be solved using different
methods or techniques.
Thus each method may be represented using an algorithm.
The important question to answer is: how do we choose the best algorithm?
• Once we develop an algorithm, it is always better to check whether
the algorithm is efficient or not. The efficiency of an algorithm
depends on the following factors:
1. Accuracy of the output
2. Robustness of the algorithm
3. User friendliness of the algorithm
4. Time required to run the algorithm
5. Space required to run the algorithm
The main aim of using a computer is to transform data from one
form to another. The algorithm describes the process of
transforming the data.
Efficiency of algorithms depends upon the data structures that are
selected for data representation.
The data structure has to be finally represented in the memory.
This is called the memory representation of the data structure. While
selecting the memory representation of a data structure, it should
make worthwhile use of memory space and should also be easy to access.
COMPLEXITY OF ALGORITHM
• Every algorithm we write should be analyzed before it is implemented
as a program.
• There are two main criteria, or reasons, upon which we can judge an
algorithm. They are:
1. The correctness of the algorithm and
2. The simplicity of the algorithm
• The correctness of an algorithm can be analyzed by tracing the
algorithm with certain sample data and by trying to answer certain
questions such as:
1. Does the algorithm do what we want it to do?
2. Does it work correctly according to the original specifications of the task?
3. Does the algorithm work when the data structure used is full?
4. Is there documentation that describes how to use it and how it works?
COMPLEXITY OF ALGORITHM
• In order to analyze this we will have to consider the time
requirements and the space requirements of the algorithm.
• These are the two parameters on which the efficiency of the
algorithms is measured.
• Space requirements are not a major problem today because
memory is very cheap. So time is often the main criterion for
measuring the efficiency of an algorithm, since we want to maximize
the utilization of the CPU and minimize the response time.
Space complexity: The space complexity of an algorithm
is the amount of main memory needed to run the program
till completion. Consider the following two algorithms for
exchanging two numbers (sketched below, since the listings
are not reproduced here):
• The first algorithm uses three variables a, b and tmp, while the
second one takes only two variables; so, from the space
complexity perspective, the second algorithm is better
than the first one.
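The two exchange algorithms are not reproduced in the source; a minimal sketch of what they presumably look like:

/* Algorithm 1: swap using a third variable tmp (three variables). */
void swap_with_tmp(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

/* Algorithm 2: swap using arithmetic only (two variables), so it
   needs one variable less, ignoring possible integer overflow. */
void swap_without_tmp(int *a, int *b)
{
    *a = *a + *b;
    *b = *a - *b; /* b now holds the original a */
    *a = *a - *b; /* a now holds the original b */
}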
TIME COMPLEXITY
• The Time complexity of an algorithm is the amount of computer
time it needs to run the program till completion.
• Measuring the time complexity in absolute time units has the
following problems:
1. The time required for an algorithm depends on the number of
instructions executed by the algorithm.
2. The execution time of an instruction depends on the computer’s power,
since different computers take different amounts of time for the
same instruction.
3. Different types of instructions take different amounts of time on
the same computer.
• Consider another algorithm, to add n even numbers (a sketch with per-statement costs follows).
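Neither the algorithm nor its instruction-count table is reproduced in the source; a plausible C sketch with per-statement execution counts, illustrating how a cost function f(n) is obtained:

#include <stdio.h>

int main(void)
{
    int n, i, sum;
    scanf("%d", &n);         /* executed 1 time  */
    sum = 0;                 /* executed 1 time  */
    for (i = 1; i <= n; i++) /* tested n+1 times */
        sum = sum + 2 * i;   /* executed n times */
    printf("%d\n", sum);     /* executed 1 time  */
    return 0;
}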
• Suppose counting the instructions executed by such an algorithm gives a
running time of f(n) = 5n² + 6n + 12. As n increases, the 5n² term grows
fastest, while the relative contribution of the 6n and 12 terms diminishes.
For larger values of n, the squared term consumes almost 99% of the
time. Since the n² term contributes most of the time, we can
eliminate the other two terms.
• Therefore, f(n) ≈ 5n²
• Here, we are getting the approximate time complexity whose
result is very close to the actual result. And this approximate
measure of time complexity is known as an Asymptotic
complexity.
• Here, we are not calculating the exact running time, we are
eliminating the unnecessary terms, and we are considering
the term which is taking most of the time.
• In mathematical analysis, asymptotic analysis of an algorithm is a
method of defining the mathematical bounds of its
run-time performance.
• Using the asymptotic analysis, we can easily conclude the
average-case, best-case and worst-case scenario of an
algorithm.
• Order of growth: Order of growth is how the time of execution
depends on the length of the input. In a simple array-traversal
example, the time of execution depends linearly on the
length of the array.
• Order of growth will help us to compute the running time with ease.
We will ignore the lower order terms, since the lower order terms
are relatively insignificant for large input. We use different notation
to describe limiting behavior of a function.
Worst Case Analysis
• In the worst case analysis, we calculate upper bound on running time
of an algorithm.
• We must know the case that causes the maximum number of operations
to be executed.
• It defines the input for which the algorithm takes the longest time.
• For linear search, the worst case happens when the element to be
searched is not present in the array. When x is not present, the
search() function compares it with all the elements of array[] one by
one. Therefore, the worst case time complexity of linear search would
be O(n).
Average Case Analysis
• In average case analysis, we take all possible inputs and
calculate computing time for all of the inputs. Sum all the
calculated values and divide the sum by total number of inputs.
• It takes average time for the program execution.
Best Case Analysis
• In the best case analysis, we calculate lower bound on running
time of an algorithm. We must know the case that causes
minimum number of operations to be executed.
• It defines the input for which the algorithm takes the lowest time
• In the linear search problem, the best case occurs when x is
present at the first location. The number of operations in the best
case is constant (not dependent on n). So the time complexity in
the best case would be O(1).
Asymptotic Analysis(O, Ω, ϴ) :
• When we calculate the complexity of an algorithm we often get
a complex polynomial. To simplify this polynomial we use
notations that represent the complexity of an algorithm,
called asymptotic notations. Below, the functions f and g are
non-negative functions.
– Big oh Notation (O)
– Omega Notation (Ω)
– Theta Notation (θ)
Big oh Notation (O):
This notation provides an upper bound on a function, which ensures that
the function never grows faster than this bound.
If f(n) and g(n) are two functions defined for positive integers,
then f(n) = O(g(n)) (read as “f(n) is big-oh of g(n)” or “f(n) is on the
order of g(n)”) if there exist positive constants c and n0 such that:
f(n) ≤ c·g(n) for all n ≥ n0
This implies that f(n) does not grow faster than g(n), or g(n) is an upper
bound on the function f(n). In this case, we are calculating the growth rate
of the function, which eventually gives the worst-case time complexity
of the algorithm, i.e., how badly it can perform.
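A standard worked example (not from the source): let f(n) = 3n + 2 and g(n) = n. Since 3n + 2 ≤ 4n whenever n ≥ 2, choosing c = 4 and n0 = 2 satisfies f(n) ≤ c·g(n) for all n ≥ n0, so f(n) = O(n).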
Omega Notation (Ω)
1. It basically describes the best-case scenario which is
opposite to the big o notation.
2. It is the formal way to represent the lower bound of an
algorithm's running time. It measures the best amount of
time an algorithm can possibly take to complete or the best-
case time complexity.
3. It determines what is the fastest time that an algorithm can
run.
If f(n) and g(n) are the two functions defined for positive
integers, then
f(n) = Ω(g(n)) (read as “f(n) is omega of g(n)” or “f(n) is on the order
of g(n)”) if there exist positive constants c and n0 such that:
f(n) ≥ c·g(n) for all n ≥ n0 and c > 0
This implies that g(n) does not
grow faster than f(n), or
g(n) is a lower bound on the
function f(n).
Theta Notation (θ)
• The theta notation mainly describes the average case scenarios.
• It represents the realistic time complexity of an algorithm. An algorithm
does not always perform at its worst or best; in real-world problems, its
behaviour mainly fluctuates between the worst case and the best case,
and this gives us the average case of the algorithm.
• Let f(n) and g(n) be the functions of n where n is the steps required to
execute the program then:
f(n) = θ(g(n)) (read as “f of n is theta of g of n”) iff there exist positive
constants c1, c2, and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n)
for all n ≥ n0.
The common names of a few order notations are listed below:
O(1) - constant, O(log n) - logarithmic, O(n) - linear, O(n log n) - linearithmic, O(n²) - quadratic, O(n³) - cubic, O(2ⁿ) - exponential.
• Worked example (the original function is not shown in the source; the
arithmetic below matches f(n) = 5n + 10 and g(n) = n for the Ω bound):
we need c and n0 with 5n + 10 ≥ c·n, i.e., c ≤ 5 + 10/n.
• Therefore, 0 ≤ c ≤ 5. Hence, c = 5.
• Now to determine the value of n0:
• 0 ≤ 5 ≤ 5 + 10/n0
• –5 ≤ 5 – 5 ≤ 5 + 10/n0 – 5
• –5 ≤ 0 ≤ 10/n0
• This holds for every positive n0, so n0 = 1 (note lim 1/n = 0 as n → ∞).
#include <stdio.h>
int main( )
{
    int a = 5;
    int *b;
    b = &a;
    printf("value of a = %d\n", a);
    printf("value of a = %d\n", *(&a));
    printf("value of a = %d\n", *b);
    printf("address of a = %u\n", &a);
    printf("address of a = %p\n", b);
    printf("value of b = address of a = %x\n", b);
    printf("address of b = %X", &b);
    return 0;
}
OUTPUT:
value of a = 5
value of a = 5
value of a = 5
address of a = 6487580
address of a = 000000000062FE1C
value of b = address of a = 62fe1c
address of b = 62FE10
INITIALIZATION OF POINTER VARIABLE:-
The process of assigning the address of a variable to a pointer variable
is known as initialization. Once a pointer variable has been declared, we
can use the assignment operator to initialize the variable. EXAMPLE:
int quantity;
int *p;
p = &quantity;

#include<stdio.h>
int main()
{
    int a;
    int *ptr;
    a = 10;
    ptr = &a;
    printf("Value of ptr: %u", ptr);
    return (0);
}
Dereferencing Pointer:
• Dereferencing is an operation performed to access and manipulate the data
contained in the memory location pointed to by a pointer.
• The operator * is used to dereference pointers. A pointer variable is
dereferenced when the unary operator *, in this case called the indirection
operator, is used as a prefix to the pointer variable.
#include<stdio.h>
int main()
{
    int *p, v1, v2;
    p = &v1;
    *p = 25;
    *p += 10;
    printf(" value of v1= %d\n", v1);
    v2 = *p;
    printf("value of v2= %d\n", v2);
    p = &v2;
    *p += 20;
    printf(" now the value of v2=%d \n", v2);
}
output
value of v1= 35
value of v2= 35
now the value of v2=55
Accessing the Address of a Variable through its Pointer:-
• Once a pointer has been assigned the address of a variable, how do we
access the value of that variable through its pointer? This is done by using
the unary operator * (asterisk). Consider the following statements:
int ds, *p, n;
ds = 133;
p = &ds;
n = *p;
• Here ds and n are declared as integers and p as a pointer variable that points
to an integer. When the operator * is placed before a pointer variable in an
expression, it returns the value of the variable whose address the pointer
holds. In this case, *p returns the value of the variable ds. Thus the value of n
would be 133. The two statements
p = &ds;
n = *p; are equivalent to
n = *&ds; which in turn is equivalent to n = ds;
#include <stdio.h>
int main()
{
    int *p, q;
    q = 19;
    p = &q; /* assign p the address of q */
    printf(" Value of q=%d\n", q);
    printf("Contents of p=%d\n ", *p);
    printf("Address of q stored in p=%d ", p);
    return 0;
}
OUTPUT:
Value of q=19
Contents of p=19
Address of q stored in p=6487572
Pointer Arithmetic
• The size of the data type which the pointer variable points to determines
the number of bytes accessed in memory during pointer arithmetic.
• The step size of pointer arithmetic depends on the data type of the
variable pointed to by the pointer.
• Some arithmetic operations can be performed with pointers.
• The C language supports the following arithmetic operations on
pointers. They are:
1. Pointer increment(++)
2. Pointer decrement(- -)
3. addition (+)
4. subtraction (-)
5. Subtracting two pointers of the same type
6. Comparison of pointers
Pointer increment and decrement:-
• Integer, float, char and double pointers can be incremented and
decremented. For all these data types, both prefix and postfix
increment or decrement are allowed.
• A pointer is incremented or decremented by the size of the type it
points to: char pointers by one byte, float pointers by four, double
pointers by eight, and int pointers by the size of int (2 bytes on old
16-bit compilers, 4 bytes on typical 32-bit/64-bit machines).
• Note that pointer arithmetic cannot be performed on void pointers,
since they have no data type associated with them.
• Let int *x;
x++ /*valid*/
++x /*valid*/
x-- /*valid*/
--x /*valid*/
• The rule to increment a pointer is given as
new_address = current_address + i * sizeof(data type)
• where i is the number by which the pointer gets increased.
• For a 2-byte int (16-bit compilers), it will be incremented by 2 bytes per step.
• For a 4-byte int (typical 32-bit/64-bit machines), it will be incremented by 4 bytes per step.
• The rule to decrement a pointer is given as
new_address = current_address - i * sizeof(data type)
Note: If we have more than one pointer pointing to the same location,
then a change made through one pointer is visible through the other
pointers as well.
void main()
{
int *p1,x;
float *f1,f;
char *c1,c;
p1=&x;
f1=&f;
c1=&c;
printf("Memory address before increment:\n int=%p\n,float=%p\n, char=%p\n",p1,f1
,c1);
p1++;
f1++;
c1++;
printf("Memory address after increment:\n int=%p\n, float=%p\n,char=%p\n",p1,f1,
c1);
}
C Pointer Addition
• We can add a value to the pointer variable. The formula of adding
value to pointer is given
new_address= current_address + (number * size_of(data type))
#include<stdio.h>
int main(){
    int number=50;
    int *p; //pointer to int
    p=&number; //stores the address of number variable
    printf("Address of p variable is %u \n", p);
    p=p+3; //adding 3 to pointer variable
    printf("After adding 3: Address of p variable is %u \n", p);
    return 0;
}
Output: the second address is larger by 3 * sizeof(int) = 12 bytes (with a 4-byte int).
Pointer to Pointer
• A pointer can also store the address of another pointer. Here, the pointer
variable p1 contains the address of the pointer variable p2.
This is known as multiple indirection.
• A variable that is a pointer to a pointer must be declared using an additional
indirection operator symbol in front of the name.
• The general syntax for declaring a pointer to pointer is:
<data type> **<pointer to pointer variable name>;
Example: int **p1;
#include <stdio.h>
int main ()
{
    int var;
    int *ptr;
    int **pptr;
    var = 3000;
    ptr = &var; /* take the address of var */
    pptr = &ptr; /* take the address of ptr using the address-of operator & */
    printf("Value of var = %d\n", var );
    printf("Value available at *ptr = %d\n", *ptr );
    printf("Value available at **pptr = %d\n", **pptr); /* take the value using pptr */
    return 0;
}
OUTPUT:-
Value of var = 3000
Value available at *ptr = 3000
Value available at **pptr = 3000
(Diagram: ppa at address 4000 holds 3000, pa at address 3000 holds 2000, and a at address 2000 holds the value 5, illustrating double indirection.)
Example: an array of character pointers (strings) passed to a function. The print() helper is not shown in the source; a minimal version is sketched here:
#include <stdio.h>
void print(char* arr[], int n) /* print the first n strings */
{
    for (int i = 0; i < n; i++)
        printf("%s\n", arr[i]);
}
int main()
{
    char* arr[10] = {"Geek", "Geeks ", "Geekfor"};
    print(arr, 3);
    return 0;
}
C Array
• An array is defined as the collection of similar type of data items
stored at contiguous memory locations.
• Arrays are a derived data type in the C programming language
which can store primitive types of data such as int, char, double,
float, etc.
• It also has the capability to store the collection of derived data
types, such as pointers, structure, etc.
• The array is the simplest data structure where each data element
can be randomly accessed by using its index number.
Declaration of a One-Dimensional Array: Arrays must be declared
explicitly before they are used. The general form of declaration is:
Data_type Array_Name[Array_Size];
int marks[5];
• Here, int is the data_type, marks is the array_name, and 5 is
the array_size.
Initialization of One Dimensional Array
• Once an array is declared, it should be initialized. Otherwise the
array will contain garbage values. There are two different ways in
which we can initialize an array:
1. Compile time
2. Run time
• Compile Time Initialization
1. Initializing Arrays during Declaration
data-type array-name[size] = { list of values separated by comma };
For example:
int marks[5]={90, 82, 78, 95, 88};
• Note that if the number of values provided is less than the number of
elements in the array, the unassigned elements are filled with zeros
(a small sketch follows).
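A small sketch of the zero-fill behaviour just described (illustrative, not from the source):

int marks[5] = {90, 82};
/* marks[0] = 90, marks[1] = 82, and the remaining elements
   marks[2], marks[3], marks[4] are automatically set to 0. */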
#include <stdio.h>
int main()
{
    int arr1[]={1,2,3,4,5};
    int arr2[]={0,2,4,6,8};
    int arr3[]={1,3,5,7,9};
    int *parr[3] = {arr1, arr2, arr3};
    int i;
    for(i = 0; i < 3; i++)
        printf(" %d", *parr[i]);
    return 0;
}
Output: 1 0 1
• parr[0] stores the base address of arr1 (or, &arr1[0]). So writing *parr[0]
will print the value stored at &arr1[0]. Same is the case with *parr[1] and
*parr[2]
Pointer to an Array:
• We have seen a pointer that points to the first element of an array. We
can also declare a pointer that points to the whole array.
• For example, int (*ptr)[10]; here ptr points to an array of 10 integers.
• Note that it is necessary to enclose the pointer name inside parentheses.
• We know that pointer arithmetic is performed relative to the base size,
so if we write ptr++, the pointer ptr will be shifted forward by 40 bytes
(10 elements × 4 bytes per int).
#include <stdio.h>
int main()
{
int arr[] = { 3, 5, 6, 7, 9 };
int *p = arr;
int (*ptr)[5] = &arr;
printf("p = %p, ptr = %p\n", p, ptr);
printf("*p = %d, *ptr = %p *ptr[0]=%d\n", *p, *ptr, *ptr[0]);
printf("sizeof(*p) = %lu, sizeof(*ptr) = %lu\n", sizeof(*p), sizeof(*ptr));
return 0;
}
Output:
p = 000000000062FDF0, ptr = 000000000062FDF0
*p = 3, *ptr = 000000000062FDF0 *ptr[0]=3
sizeof(*p) = 4, sizeof(*ptr) = 20
#include <stdio.h>
int main()
{
    int *p, i;
    int x[5] = { 1, 2, 3, 4, 5 };
    int (*ptr)[5];
    p = x;    // Points to the 0th element of x.
    ptr = &x; // Points to the whole array x.
    printf(" p=%u, ptr=%u\n", p, ptr);
    p++;
    ptr++;
    printf("\n p=%u, ptr=%u\n", p, ptr);
}
Output:
p=6487536, ptr=6487536
p=6487540, ptr=6487556
#include <stdio.h>
int main()
{
int(*p)[5]; // Pointer to an array of five numbers
int b[5] = { 1, 2, 3, 4, 5 };
int i = 0;
p = &b; // Points to the whole array b
for (i = 0; i < 5; i++)
printf("%d\t", *(*p+ i));
return 0;
}
Output : 1 2 3 4 5
Two Dimensional Array: It can be defined as ‘an array which holds
two subscripts’. Generally, a 2-D array is called a matrix.
Declaration of a Two-Dimensional Array
Data_Type Array_Name[row size][column size];
For Example: int x[3][4];
• Where int is the type of the array, x is the array name, row size is
the number of rows, and column size is the number of columns.
• To find the total number of elements of an array, multiply the total
number of rows by the total number of columns.
For Example: int a[3][4]; has 3 × 4 = 12 elements.
#include <stdio.h>
int main ()
{
int a[5][2] = { {0,0}, {1,2}, {2,4}, {3,6},{4,8}}; /* an array with 5 rows and 2 columns*/
int i, j;
/* output each array element's value */
for ( i = 0; i < 5; i++ )
{
for ( j = 0; j < 2; j++ )
{
printf("a[%d][%d] = %d\n", i, j, a[i][j] );
}
}
return 0;
}
Write a program to read and print a matrix of order m × n:
#include <stdio.h>
int main( )
{
    int mat[50][50], i, j, m, n;
    printf("Enter the number of Rows: ");
    scanf("%d", &m);
    printf("Enter the number of Columns: ");
    scanf("%d", &n);
    printf("Enter the elements of the Matrix: ");
    for(i=0;i<m;i++)
    {
        for(j=0;j<n;j++)
        {
            scanf("%d", &mat[i][j]);
        }
    }
    printf("The Matrix is:\n");
    for(i=0;i<m;i++)
    {
        for(j=0;j<n;j++)
        {
            printf("\t%d", mat[i][j]);
        }
        printf("\n");
    }
    return 0;
}
Row Major ordering
• In row major ordering, all the rows of the 2D array are stored in
memory contiguously. Its memory allocation according to row major
order is as follows:
• First, the 1st row of the array is stored in memory completely, then
the 2nd row of the array is stored completely, and so on till
the last row.
Calculating the Address of the random element of a 2D array
• By Row Major Order: If array is declared by a[m][n] where m is the
number of rows while n is the number of columns, then address of an
element a[i][j] of the array stored in row major order is calculated as,
Address(a[i][j]) = B. A. + ( n*( i-l1)+( j-l2) )* size
where, B. A. is the base address or the address of the first element of
the array a[0][0] ,
n=(u2 –l2+1), and l1 is lower bound of row, l2 is lower bound of column,
u1 is upper bound of row, u2 is upper bound of column.
Example : a[10...30, 55...75], base address of the array (BA) = 0, size of
an element = 4 bytes . Find the location of a[15][68].
l1=10, l2=55,
u1=30, u2=75, n=(75-55+1)=21
Address(a[15][68]) = 0 + ( 21*(15 - 10) + (68 - 55) ) * 4
= (21*5 + 13) * 4
= 118 * 4
= 472 answer
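The same calculation can be written as a small C helper (an illustrative function, not from the source):

#include <stdio.h>

/* Row-major address of a[i][j] for an array a[l1..u1][l2..u2]
   with base address ba and element size `size` in bytes. */
long row_major_address(long ba, int size, int i, int j,
                       int l1, int l2, int u2)
{
    int n = u2 - l2 + 1; /* number of columns */
    return ba + ((long)n * (i - l1) + (j - l2)) * size;
}

int main(void)
{
    /* a[10...30, 55...75], base address 0, 4-byte elements */
    printf("%ld\n", row_major_address(0, 4, 15, 68, 10, 55, 75)); /* prints 472 */
    return 0;
}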
Column Major ordering
• According to column major ordering, all the columns of the 2D array are
stored in memory contiguously. The memory allocation of the array is
given as follows:
• First, the 1st column of the array is stored in memory completely, then
the 2nd column of the array is stored completely, and so on till
the last column of the array.
Calculating the Address of the random element of a 2D array
• By Column major order: If an array is declared by a[m][n] where m is
the number of rows while n is the number of columns, then the address
of an element a[i][j] of the array stored in column major order is
calculated as,
Address(a[i][j]) = B. A. + ( m*( j-l2) + ( i-l1) )* size
where m = (u1 - l1 + 1) is the number of rows.
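A worked example for column major order (not in the source; it reuses the array from the row-major example): for a[10...30, 55...75] with base address 0 and 4-byte elements, m = 30 - 10 + 1 = 21, so
Address(a[15][68]) = 0 + ( 21*(68 - 55) + (15 - 10) ) * 4 = (273 + 5) * 4 = 1112.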
(Diagram: for a 2D array arr, arr+2 points to row 2, *(arr+2) is that row, *(arr+2)+3 points to its element 3, and *(*(arr+2)+3) is the value of that element.)
#include<stdio.h>
int main()
{
int arr[3][4] = {{10, 11, 12, 13}, {20, 21, 22, 23}, {30, 31, 32, 33} };
int (*ptr)[4];
ptr = arr;
printf("%p %p %p\n", ptr, ptr + 1, ptr + 2);
printf("%p %p %p\n", *ptr, *(ptr + 1), *(ptr + 2));
printf("%d %d %d\n", **ptr, *(*(ptr + 1) + 2), *(*(ptr + 2) + 3));
printf("%d %d %d\n", ptr[0][0], ptr[1][2], ptr[2][3]);
return 0;
}
Output:
0x7ffead967560 0x7ffead967570 0x7ffead967580
0x7ffead967560 0x7ffead967570 0x7ffead967580
10 22 33
10 22 33
#include <stdio.h>
int main( )
{ int s[4][2] = {{ 12, 56 }, { 12, 33 }, { 14, 80 }, { 13, 78 }} ;
int i, j ;
for ( i = 0 ; i <= 3 ; i++ )
{ printf("\n Address of %d th row one D array=%u %u\n", i, s[i], *(s+i));
printf ( "\n" ) ;
for ( j = 0 ; j <= 1 ; j++ )
printf ( "S[%d][%d]=%d ", i, j,*( *( s + i ) + j ) ) ;
}}
Output:
Address of 0 th row one D array=6487536 6487536
S[0][0]=12 S[0][1]=56
Address of 1 th row one D array=6487544 6487544
S[1][0]=12 S[1][1]=33
Address of 2 th row one D array=6487552 6487552
S[2][0]=14 S[2][1]=80
Address of 3 th row one D array=6487560 6487560
S[3][0]=13 S[3][1]=78
• The golden rule to access an element of a two-dimensional array can
be given as arr[i][j] = (*(arr+i))[j] = *(*(arr+i)+j) = *(arr[i]+j)
• Therefore, arr[0][0] = (*arr)[0] = *((*arr)+0) = *(arr[0]+0)
MULTI-DIMENSIONAL ARRAYS
• A multi-dimensional array in simple terms is an array of arrays. As we have
one index in a one dimensional array, two indices in a two-dimensional
array, in the same way, we have n indices in an n-dimensional array or
multi-dimensional array.
• An n-dimensional m1 × m2 × m3 × … × mn array is a collection of
m1·m2·m3·…·mn elements.
• In a multi-dimensional array, a particular element is specified by using n
subscripts as A[I1][I2][I3]...[In], where I1 ≤ M1, I2 ≤ M2, I3 ≤ M3, ..., In ≤ Mn.
• For example, a three-dimensional array declared as int a[3][4][2] has three
pages (2-D blocks), four rows, and two columns.
#include<stdio.h>
int i, j, k; // variables for nested for loops
int main()
{
    int arr[2][3][4]; // array declaration
    printf("enter the values in the array: \n");
    for(i = 0; i < 2; i++) // represents blocks
    {
        for(j = 0; j < 3; j++) // represents rows
        {
            for(k = 0; k < 4; k++) // represents columns
            {
                printf("the value at arr[%d][%d][%d]: ", i, j, k);
                scanf("%d", &arr[i][j][k]);
            }
        }
    }
    printf("print the values in array: \n");
    for(i = 0; i < 2; i++)
    {
        for(j = 0; j < 3; j++)
        {
            for(k = 0; k < 4; k++)
            {
                printf("%d ", arr[i][j][k]);
            }
            printf("\n");
        }
        printf("\n");
    }
    return 0;
}
• To find the address of any element in 3-Dimensional arrays there are the
following two ways-
– Row Major Order
– Column Major Order
Row Major Order:
• Assuming 3 dimensions (depth, rows, and columns), a 3D array in
row-major order is declared as:
• Data type Array[depth][rows-size][columns-size];
• To find the address of the element using row-major order, use the following formula:
Address of A[i][j][k] = B + W *(M* N * (i-x) + N*(j-y) + (k-z))
• Here: B = Base Address (start address)
W = Weight (storage size of one element stored in the array)
M = Row (total number of rows)
N = Column (total number of columns)
P = Width (total number of cells depth-wise)
x = Lower Bound of Row
y = Lower Bound of Column
z = Lower Bound of Width
• Example: Given an array arr[1:9, -4:1, 5:10] with a base value of 400 and the size of each
element 2 bytes in memory, find the address of element arr[5][-1][8] with the help of
row-major order.
• Solution:
Block subscript of the element whose address is to be found: I = 5
Row subscript of the element whose address is to be found: J = -1
Column subscript of the element whose address is to be found: K = 8
Base address B = 400
Storage size of one element stored in the array (in bytes) W = 2
Lower limit of blocks in the matrix x = 1
Lower limit of rows / start row index of the matrix y = -4
Lower limit of columns / start column index of the matrix z = 5
M (rows) = Upper Bound – Lower Bound + 1 = 1 – (-4) + 1 => M = 6
N (columns) = Upper Bound – Lower Bound + 1 = 10 – 5 + 1 => N = 6
• Formula used: Address of A[i][j][k] = B + W * (M * N * (i-x) + N * (j-y) + (k-z))
Address of arr[5][-1][8] = 400 + 2 * ( 6 * 6 * (5 – 1) + 6 * (–1 + 4) + (8 – 5) )
= 400 + 2 * (144 + 18 + 3) = 400 + 2 * 165 = 730
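A small C check of this arithmetic (an illustrative helper, not from the source):

#include <stdio.h>

/* Row-major address of arr[i][j][k], where M rows and N columns
   describe each 2-D block, with base B and element width W. */
long addr_3d_row_major(long B, int W, int M, int N,
                       int i, int j, int k, int x, int y, int z)
{
    return B + (long)W * ((long)M * N * (i - x) + (long)N * (j - y) + (k - z));
}

int main(void)
{
    /* arr[1:9, -4:1, 5:10], B = 400, W = 2: expect 730 */
    printf("%ld\n", addr_3d_row_major(400, 2, 6, 6, 5, -1, 8, 1, -4, 5));
    return 0;
}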
To find the address of the element using column-major order, use the following formula:
Address of A[i][j][k]=B+W×(N×M×(k−z)+M×(j−y)+(i−x))
• Here: B = Base Address (start address)
W = Weight (storage size of one element stored in the array)
M = Row (total number of rows)
N = Column (total number of columns)
P = Width (total number of cells depth-wise)
x = Lower Bound of block (first subscript)
y = Lower Bound of Row
z = Lower Bound of Column
Example: Given an array arr[1:8, -5:5, -10:5] with a base value of 400 and the size of each
element 4 bytes in memory, find the address of element arr[3][3][3] with the help of
column-major order.
• Solution:
Row subscript of the element whose address is to be found: I = 3
Column subscript of the element whose address is to be found: J = 3
Block subscript of the element whose address is to be found: K = 3
Base address B = 400
Storage size of one element stored in the array (in bytes) W = 4
Lower limit of blocks in the matrix x = 1
Lower limit of rows / start row index of the matrix y = -5
Lower limit of columns / start column index of the matrix z = -10
M (rows) = Upper Bound – Lower Bound + 1 = 8 – 1 + 1 => M = 8
N (columns) = Upper Bound – Lower Bound + 1 = 5 – (-5) + 1 => N = 11
• Formula used: Address of A[i][j][k]=B+W×(N×M×(k−z)+M×(j−y)+(i−x))
Address of arr[3][3][3] = 400 + 4 * ( 11 * 8 * (3 - (-10)) + 8 * (3 - (-5)) + (3 - 1) )
= 400 + 4 * (88 * 13 + 8 * 8 + 2) = 400 + 4 * 1210 = 5240
Pointers and Three Dimensional Arrays
• In a three dimensional array we can access each element by using three subscripts.
int arr[2][3][2] = { {{5, 10}, {6, 11}, {7, 12}}, {{20, 30}, {21, 31}, {22, 32}} };
• We can consider a three dimensional array to be an array of 2-D arrays, i.e., each
element of a 3-D array is considered to be a 2-D array. The 3-D array arr above can be
considered as an array consisting of two elements where each element is a 2-D
array. The name of the array, arr, is a pointer to the 0th 2-D array (see the sketch below).
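A short sketch of this view, using the same array (the pointer expressions are added for illustration):

#include <stdio.h>

int main(void)
{
    int arr[2][3][2] = { {{5, 10}, {6, 11}, {7, 12}},
                         {{20, 30}, {21, 31}, {22, 32}} };

    /* arr points to the 0th 2-D array; *(arr + 1) is the 1st 2-D array. */
    printf("%d\n", arr[1][2][0]);             /* prints 22 */
    printf("%d\n", *(*(*(arr + 1) + 2) + 0)); /* same element: 22 */
    return 0;
}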
Binary Search
Binary search repeatedly halves a sorted array. For a sorted array of 9 elements (indices 0 to 8):
• beg = 0
• end = 8
• mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.
Recursive implementation of binary search
binary_search(A, ITEM, low, high):
    if (low > high)
        return False
    else
        mid = (low + high)/2
        if (A[mid] == ITEM)
            return True
        else if (A[mid] < ITEM)
            return binary_search(A, ITEM, mid+1, high)
        else
            return binary_search(A, ITEM, low, mid-1)
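The pseudocode translated into C (a sketch; mid is computed as (low + high)/2 as above, though low + (high - low)/2 avoids overflow for very large indices):

#include <stdio.h>

/* Returns 1 if item is present in the sorted array a[low..high], else 0. */
int binary_search(int a[], int item, int low, int high)
{
    if (low > high)
        return 0;                  /* not found */
    int mid = (low + high) / 2;
    if (a[mid] == item)
        return 1;                  /* found */
    else if (a[mid] < item)
        return binary_search(a, item, mid + 1, high); /* search right half */
    else
        return binary_search(a, item, low, mid - 1);  /* search left half  */
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56, 72};
    printf("%d\n", binary_search(a, 23, 0, 8)); /* 1: present */
    printf("%d\n", binary_search(a, 7, 0, 8));  /* 0: absent  */
    return 0;
}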
Time Complexity of Binary Search Algorithm:
• Best case is when the element is at the middle index of the array. It takes only one
comparison to find the target element. So the best case complexity is O(1).
Average Case Time Complexity of Binary Search:
• Case 1: The element is present in the array.
• Case 2: The element is not present in the array.
The average case arises when the target element is present at some location
other than the central index.
• The time complexity depends on the number of comparisons to find the desired element.
• In the following iterations, the size of the subarray is reduced using the result of the
previous comparison.
– Initial length of array = n
– Iteration 1 – Length of array = n/2
– Iteration 2 – Length of array = (n/2)/2 = n/2²
– Iteration k – Length of array = n/2^k
• After k iterations, the size of the array becomes 1 (narrowed down to the first element
or last element only).
• Length of array = n/2^k = 1 => n = 2^k
Applying the log function on both sides:
=> log2(n) = log2(2^k)
=> log2(n) = k·log2(2) = k
=> k = log2(n)
• Therefore, the overall Average Case Time Complexity of Binary Search is O(logn).
Worst Case Time Complexity of Binary Search
The worst-case scenario of Binary Search occurs when the target element is
the smallest element or the largest element of the sorted array.
Since the target element is present at the first or last index, there are about
log2(n) comparisons in total.
Therefore, the Worst Case Time Complexity of Binary Search is O(logn).
Case            Time Complexity
Best Case       O(1)
Average Case    O(log n)
Worst Case      O(log n)