Programming With Data Structure
Organization of Data
Accessing methods
Degree of associativity
Processing alternatives for information
Algorithm + Data Structure = Program
Data structure study covers the following points:
Amount of memory required to store the data.
Amount of time required to process the data.
Representation of data in memory.
Operations performed on that data.
Data organization
• Organization of data refers to classifying and organizing data to
make it more meaningful and usable.
• The collection of data you work with in a program has some kind
of structure or organization. No matter how complex your data
structures are, they can be broken down into two fundamental
types.
1. Contiguous
2. Non-Contiguous
• In contiguous structures, terms of data are kept together in memory
(either RAM or in a file). An array is an example of a contiguous
structure.
• In contrast, items in a non-contiguous structure are scattered in
memory, but are linked to each other in some way. A linked list is
an example of a non-contiguous data structure.
Classification of Data Structure
Primitive Data Structure
• Primitive data structures are basic structures and are directly operated
upon by machine instructions.
• Primitive data structures have different representations on different
computers.
• The storage structure of these data structures may vary from one machine
to another.
• Integers, floats, characters and pointers are examples of primitive data
structures.
Non-Primitive Data Structures
1. Non-Primitive Data Structures are those data structures derived from
Primitive Data Structures.
2. These data structures can't be manipulated or operated directly by
machine-level instructions.
3. These data structures focus on forming a set of data elements that is
either homogeneous (all of the same data type) or heterogeneous (of
different data types). Examples are arrays, files, strings, unions,
linked lists, stacks, queues, etc. We can divide these data structures
into two sub-categories -
• Linear Data Structures
• Non-Linear Data Structures
Linear data structures
• A data structure is said to be linear if and only if its elements
form a sequence, i.e., there is an adjacency relationship between
the elements, or they are stored in sequential memory locations.
• There are two ways to represent a linear data structure in
memory,
– Static memory allocation
– Dynamic memory allocation
• The possible operations on the linear data structure are:
Traversal, Insertion, Deletion, Searching, Sorting and
Merging.
• Examples of Linear data structure are arrays, linked list,
Stack and Queue etc.
Nonlinear data structures
• Nonlinear data structures are those data structures in which data
items are not arranged in a sequence.
• A data structure in which insertion and deletion are not possible in a
linear fashion, and elements are stored based on the hierarchical
relationship among the data.
• Examples of Non-linear Data Structure are Tree and Graph.
Tree: A tree can be defined as a finite set of data items (nodes) in which
data items are arranged in branches and sub-branches according
to requirement.
• Trees represent the hierarchical relationship between various
elements.
• A tree consists of nodes connected by edges; a node is represented
by a circle and an edge by a line.
Graph: Graph is a collection of nodes (Information) and connecting
edges (Logical relation) between nodes.
Operation on Data Structures
• Some of the common operations on Non-Primitive Data
Structure are:
1. Create: The create operation results in reserving memory for
program elements. This can be done by a declaration statement.
Creation of a data structure may take place either at
compile-time or at run-time.
2. Destroy / delete: The destroy operation frees the memory space
allocated for a specified data structure; the free() function of
the C language is used for this.
• It is the process of removing an item from the structure.
3. Selection: Selection operation deals with accessing a
particular data within a data structure.
4. Updation: It updates or modifies the data in the data structure.
5. Traversal: Traversal is a process of visiting each and every
node of a list in a systematic manner.
6. Searching: It is the process of finding the location of the
element with a given key value in a particular data structure
or finding the location of an element, which satisfies the
given condition.
7. Sorting: It is the process of arranging the elements of a
particular data structure in some form of logical order. The
order may be either ascending or descending or alphabetic
order depending on the data items present.
8. Merging: Merging is a process of combining the data items
of two different sorted lists into a single sorted list.
9. Insertion: It is the process of adding a new element to the
structure. Most of the times this operation is performed by
identifying the position where the new element is to be
inserted.
10. Splitting: Splitting is a process of partitioning a single list into
multiple lists.
ALGORITHM
• A set of ordered steps or procedures necessary to solve a
problem.
• An algorithm is a sequence of unambiguous instructions for
solving a problem.
• An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task. In addition, all algorithms
must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: If we trace out the instructions of an algorithm, then
for all cases, the algorithm terminates after a finite number of steps.
5. Effectiveness: Every instruction must be very basic so that it can
be carried out, in principle, by a person using only pencil and paper.
It is not enough that each operation be definite as in criterion 3; it
also must be feasible.
Basic Statements Used and Examples
An algorithm always begins with the word ‘Start’ and ends with
the word ‘Stop’ (or ‘End’).
The step-wise solution is written in distinct steps, as shown in the
following example:
Start
Step 1:
Step 2:
.
.
.
Step n:
End
Input Statement: An algorithm takes one or more inputs to process.
The statements used to indicate input are Read a or Input b. For
example, let a, b be the names of the inputs:
Input a or Read a
Input b or Read b
where a and b are variable names.
Output Statements: An algorithm produces one or more outputs.
The statement used to show the output is Output b or Print b.
Syntax: Output variable-name
Print variable-name
For example, Output a or Print a
Output b or Print b
where a and b are variable names.
Assignment Statements:
Processing can be done using the assignment statement.
i.e. L.H.S = R.H.S
On the L.H.S is a variable.
While on the R.H.S is a variable or a constant or an expression.
• The value of the variable, constant or expression on the R.H.S is
assigned to the L.H.S.
The L.H.S and R.H.S should be of the same type. Here ‘=’ is
called the assignment operator.
For example, let the variables be x and y, and let their product be z.
This can be represented as:
Read x, y
z = x * y
Order in which the steps of an algorithm are executed is divided in to
3 types namely
i) Sequential Order
ii) Conditional Order
iii) Iterative Order
Sequential Order
Each step is performed in serial fashion i.e. in a step by step procedure
for example
Write an algorithm to add two numbers.
Step 1 : Start
Step 2 : Read a
Step 3 : Read b
Step 4 : Add a , b
Step 5 : Store in d
Step 6 : Print d
Step 7 : End
Conditional Order
Based on whether the given condition is met or not, the algorithm
selects the next step to do. If statements are used when a decision has
to be made. Different formats of if statements are available; they are
a) Syntax : if (condition)
Then {set of statements S1}
• Here, condition means a Boolean expression which is TRUE or
FALSE.
• If the condition is TRUE, then the statements S1 are evaluated.
• If FALSE, S1 is not evaluated and the program skips that section.
• For example
Write an algorithm to check equality of numbers.
Step 1 : Start
Step 2 : Read a, b
Step 3 : if a == b, print numbers are equal to each other
Step 4 : End
b) if else (condition) statement:
if (condition)
Then {set of statements S1}
else
Then {set of statements S2}
Here, if the condition evaluates to true then S1 is executed; otherwise
the else statements are executed. For example, write an algorithm to
print the grade.
Step 1 : Start
Step 2 : Read marks
Step 3 : if marks greater than 60 is TRUE
print ’GRADE A’
Step 4 : Other wise
print ’GRADE B’
Step 5 : End
iii) Iterative Order
Here the algorithm repeats a finite set of steps over and
over until the condition is met. An iterative operation is also
called a looping operation. For example:
Add natural numbers until the sum reaches 5.
Step 1 : Start
Step 2 : set count to 0 and i=1
Step 3 : add i to count
Step 4 : i=i+1
Step 5 : if count is less than 5, then repeat steps 3 & 4
Step 6 : otherwise print count
Step 7 : End
The range of inputs for which an algorithm works has to be specified
carefully.
– The same algorithm can be represented in several different ways.
– Several algorithms for solving the same problem may exist.
– Algorithms for the same problem can be based on very different
ideas and can solve the problem with dramatically different speeds.
Every problem, as we understand, can be solved using different
methods or techniques.
Thus each method may be represented using an algorithm.
The important question to answer is: how do we choose the best algorithm?
• Once we develop an algorithm, it is always better to check whether
the algorithm is efficient or not. The efficiency of an algorithm
depends on the following factors:
1. Accuracy of the output
2. Robustness of the algorithm
3. User friendliness of the algorithm
4. Time required to run the algorithm
5. Space required to run the algorithm
The main aim of using a computer is to transform data from one
form to another. The algorithm describes the process of
transforming data.
Efficiency of algorithms depends upon the data structures that are
selected for data representation.
The data structure has to be finally represented in the memory.
This is called the memory representation of data structures. While
selecting the memory representation of a data structure, it should
use memory space economically and it should also be easy to access.
COMPLEXITY OF ALGORITHM
• Every algorithm we write should be analyzed before it is implemented
as a program.
• There are two main criteria or reasons upon which we can judge an
algorithm. They are:
1. The correctness of the algorithm and
2. The simplicity of the algorithm
• The correctness of an algorithm can be analyzed by tracing the
algorithm with certain sample data and by trying to answer certain
questions such as:
1. Does the algorithm do what we want it to do?
2. Does it work correctly according to the original specifications of the task?
3. Does the algorithm work when the data structure used is full?
4. Is there documentation that describes how to use it and how it works?
• In order to analyze this we will have to consider the time
requirements and the space requirements of the algorithm.
• These are the two parameters on which the efficiency of the
algorithms is measured.
• Space requirements are not a major problem today because
memory is very cheap. So time is the main criterion for
measuring the efficiency of an algorithm, as we have to maximize
the utilization of the CPU and minimize the response time.
Space complexity: The space complexity of an algorithm
is the amount of main memory needed to run the program
till completion. Consider the following two algorithms for
exchanging two numbers:
• The first algorithm uses three variables a, b and tmp, while the
second one takes only two variables; so, from the space
complexity perspective, the second algorithm is better
than the first one.
TIME COMPLEXITY
• The Time complexity of an algorithm is the amount of computer
time it needs to run the program till completion.
• Measuring the time complexity in absolute time units has the
following problems:
1. The time required for an algorithm depends on the number of
instructions executed by the algorithm.
2. The execution time of an instruction depends on the computer's power,
since different computers take different amounts of time for the
same instruction.
3. Different types of instructions take different amounts of time on the
same computer.
• Consider another algorithm for adding n even numbers, whose
running time is f(n) = 5n² + 6n + 12.
• As the value of n increases, the running time of the 5n² term grows,
while the share of the total time contributed by 6n and 12 shrinks.
For larger values of n, the squared term consumes almost 99% of the
time. As the n² term contributes most of the time, we can
eliminate the other two terms.
• Therefore,
• f(n) = 5n2
• Here, we are getting the approximate time complexity whose
result is very close to the actual result. And this approximate
measure of time complexity is known as an Asymptotic complexity.
Here, we are not calculating the exact running time, we are
eliminating the unnecessary terms, and we are just considering the
term which is taking most of the time.
• In mathematical analysis, asymptotic analysis of an algorithm is a
method of defining the mathematical bounds of its run-time
performance. Using asymptotic analysis, we can easily
conclude the average-case, best-case and worst-case scenarios of an
algorithm.
• Order of growth: Order of growth is how the time of execution
depends on the size of the input. In the above example, we can
clearly see that the time of execution depends on the size of the
input n.
• Order of growth will help us to compute the running time with ease.
We will ignore the lower order terms, since the lower order terms
are relatively insignificant for large input. We use different notation
to describe limiting behavior of a function.
Worst Case Analysis
• In the worst case analysis, we calculate upper bound on running time
of an algorithm.
• In that causes maximum number of operations to be executed.
• It defines the input for which the algorithm takes a huge time.
• For Linear Search, the worst case happens when the element to be
searched is not present in the array. When x is not present, the search
() functions compares it with all the elements of array [] one by one.
Therefore, the worst case time complexity of linear search would be.
.
Average Case Analysis
• In average case analysis, we take all possible inputs and
calculate the computing time for each of them. We sum all the
calculated values and divide the sum by the total number of inputs.
• It gives the average time for the program execution.
Best Case Analysis
• In the best case analysis, we calculate a lower bound on the running
time of an algorithm. We must know the case that causes the
minimum number of operations to be executed.
• It defines the input for which the algorithm takes the lowest time.
• In the linear search problem, the best case occurs when x is
present at the first location. The number of operations in the best
case is constant (not dependent on n). So the time complexity in
the best case would be O(1).
Asymptotic Analysis(O, Ω, ϴ) :
• When we calculate the complexity of an algorithm we often get
a complex polynomial. To simplify this complex polynomial we
use some notations to represent the complexity of an algorithm,
called Asymptotic Notation. The functions f and g are
non-negative functions.
– Big oh Notation (O)
– Omega Notation (Ω)
– Theta Notation (θ)
Big oh Notation (O):
This notation provides an upper bound on a function, which ensures that
the function never grows faster than that bound.
If f(n) and g(n) are two functions defined for positive integers,
then f(n) = O(g(n)) (read as f(n) is big oh of g(n), or f(n) is on the
order of g(n)) if there exist constants c > 0 and n0 such that:
f(n) ≤ c·g(n) for all n ≥ n0
This implies that f(n) does not grow faster than g(n), or g(n) is an upper
bound on the function f(n). In this case, we are calculating the growth rate
of the function, which eventually gives the worst-case time complexity
of a function, i.e., how badly an algorithm can perform.
Omega Notation (Ω)
1.It basically describes the best-case scenario which is
opposite to the big o notation.
2.It is the formal way to represent the lower bound of an
algorithm's running time. It measures the best amount of time
an algorithm can possibly take to complete or the best-case
time complexity.
3.It determines what is the fastest time that an algorithm can
run.
If f(n) and g(n) are two functions defined for positive
integers, then f(n) = Ω(g(n)) (read as f(n) is Omega of g(n), or f(n)
is on the order of g(n)) if there exist constants c > 0 and n0 such that:
f(n) ≥ c·g(n) for all n ≥ n0
Theta Notation (θ)
• The theta notation mainly describes the average case scenarios.
• It represents the realistic time complexity of an algorithm. An
algorithm does not always perform at its worst or best; in real-world
problems, algorithms mainly fluctuate between the worst case and the
best case, and this gives us the average case of the algorithm.
• Let f(n) and g(n) be functions of n, where n is the number of steps
required to execute the program. Then:
f(n) = θ(g(n))
The above condition is satisfied only if
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0, where c1, c2 > 0.
The common name of a few order notations is listed below:

Name          Notation    Example Algorithm
Logarithmic   O(log n)    Binary Search
Worked example: to show that f(n) = 5n + 10 is Ω(n), we need constants
c and n0 with c·n ≤ 5n + 10 for all n ≥ n0, i.e., c ≤ 5 + 10/n.
• Since 10/n ≥ 0, any 0 ≤ c ≤ 5 works. Hence, c = 5.
• Now to determine the value of n0:
• 0 ≤ 5 ≤ 5 + 10/n0
• –5 ≤ 5 – 5 ≤ 5 + 10/n0 – 5
• –5 ≤ 0 ≤ 10/n0
• This holds for every n0 ≥ 1, so n0 = 1; and as lim(n→∞) 1/n = 0,
c = 5 is the largest constant that works.
#include <stdio.h>
int main( )
{
    int a = 5;
    int *b;
    b = &a;
    printf ("value of a = %d\n", a);
    printf ("value of a = %d\n", *(&a));
    printf ("value of a = %d\n", *b);
    printf ("address of a = %u\n", &a);
    printf ("address of a = %p\n", b);
    printf ("value of b = address of a = %x\n", b);
    printf ("address of b = %X", &b);
    return 0;
}
OUTPUT:
value of a = 5
value of a = 5
value of a = 5
address of a = 6487580
address of a = 000000000062FE1C
value of b = address of a = 62fe1c
address of b = 62FE10
INITIALIZATION OF POINTER VARIABLE:-
The process of assigning the address of a variable to a pointer variable
is known as initialization. Once a pointer variable has been declared, we
can use the assignment operator to initialize the variable. EXAMPLE:
int quantity;
int *p;
p = &quantity;

#include<stdio.h>
int main()
{
    int a;
    int *ptr;
    a = 10;
    ptr = &a;
    printf("Value of ptr:%u", ptr);
    return (0);
}
Dereferencing Pointer:
• Dereferencing is an operation performed to access and manipulate data
contained in the memory location pointed to by a pointer.
• The operator * is used to dereference pointers. A pointer variable is
dereferenced when the unary operator *, in this case called the indirection
operator, is used as a prefix to the pointer variable.
#include<stdio.h>
int main()
{
    int *p, v1, v2;
    p=&v1;
    *p=25;
    *p+=10;
    printf(" value of v1= %d\n", v1);
    v2=*p;
    printf("value of v2= %d\n", v2);
    p=&v2;
    *p+=20;
    printf(" now the value of v2=%d \n", v2);
    return 0;
}
output
value of v1= 35
value of v2= 35
now the value of v2=55
Accessing the Address of a Variable through its Pointer:-
• Once a pointer has been assigned the address of a variable, the question is how to
access the value of the variable through its pointer. This is done by using another
unary operator * (asterisk). Consider the following statements.
int ds, *p, n;
ds = 133;
p = &ds;
n = *p;
• Here we can see that ds and n are declared as integers and p as a pointer variable
that points to an integer. When the operator * is placed before a pointer variable in an
expression, the pointer returns the value of the variable whose address is the
pointer's value. In this case, *p returns the value of the variable ds. Thus the value
of n would be 133. The two statements
p = &ds;
n = *p; are equivalent to
n = *&ds; which in turn is equivalent to n = ds;
#include <stdio.h>
int main()
{
    int *p, q;
    q = 19;
    p = &q; /* assign p the address of q */
    printf("Value of q=%d\n", q);
    printf("Contents of p=%d\n", *p);
    printf("Address of q stored in p=%u", p);
    return 0;
}
OUTPUT:
Value of q=19
Contents of p=19
Address of q stored in p=6487572
Pointer Arithmetic
• The size of the data type which the pointer variable points to determines
the number of bytes accessed in memory when the pointer is used.
• The step size of pointer arithmetic depends on the data type of the
variable pointed to by the pointer.
• Some arithmetic operations can be performed with pointers.
• C language supports the following arithmetic operations which can be
performed on pointers. They are
1. Pointer increment(++)
2. Pointer decrement(- -)
3. addition (+)
4. subtraction (-)
5. Subtracting two pointers of the same type
6. Comparison of pointers
Pointer increment and decrement:-
• Integer, float, char, double data type pointers can be incremented and
decremented. For all these data types both prefix and post fix
increment or decrement is allowed.
• A pointer is incremented or decremented by the size of the data type it
points to: for example, on a machine with 4-byte int, an integer pointer
moves in multiples of four, a character pointer by one, a float pointer
by four, and a double pointer by eight.
• Note that pointer arithmetic cannot be performed on void pointers,
since they have no data type associated with them.
• Let int *x;
x++ /*valid*/
++x /*valid*/
x-- /*valid*/
--x /*valid*/
• The rule to increment a pointer is given as
new_address = current_address + i * sizeof(data type)
• where i is the number by which the pointer is increased.
• If int is 2 bytes (as on old 16-bit compilers), an int pointer is incremented by 2 bytes per step.
• If int is 4 bytes (as on most modern compilers), it is incremented by 4 bytes per step.
• The rule to decrement a pointer is given as
new_address = current_address - i * sizeof(data type)
Note: If we have more than one pointer pointing to the same location,
then a change made through one pointer will be visible through the
other pointers as well.
void main()
{
int *p1,x;
float *f1,f;
char *c1,c;
p1=&x;
f1=&f;
c1=&c;
printf("Memory address before increment:\n int=%p\n,float=%p\n, char=%p\n",p1,f1
,c1);
p1++;
f1++;
c1++;
printf("Memory address after increment:\n int=%p\n, float=%p\n,char=%p\n",p1,f1,
c1);
}
C Pointer Addition
• We can add a value to a pointer variable. The formula for adding a
value to a pointer is given as
new_address = current_address + (number * sizeof(data type))
#include<stdio.h>
int main(){
int number=50;
int *p; //pointer to int
p=&number; //stores the address of number variable
printf("Address of p variable is %u \n", p);
p=p+3; //adding 3 to pointer variable
printf("After adding 3: Address of p variable is %u \n", p);
return 0;
}
• Here, the pointer variable p1 contains the address of the pointer variable p2.
This is known as multiple indirections.
• A variable that is a pointer to a pointer must be declared using additional
indirection operator symbols in front of the name.
• The general syntax for declaring a pointer to pointer is:
<data type> **< pointer to pointer variable name>.
Example: int **p1;
#include <stdio.h>
int main ()
{
    int var;
    int *ptr;
    int **pptr;
    var = 3000;
    ptr = &var;  /* take the address of var */
    pptr = &ptr; /* take the address of ptr using the address-of operator & */
    printf("Value of var = %d\n", var );
    printf("Value available at *ptr = %d\n", *ptr );
    printf("Value available at **pptr = %d\n", **pptr); /* take the value using pptr */
    return 0;
}
OUTPUT:-
Value of var = 3000
Value available at *ptr = 3000
Value available at **pptr = 3000
Memory picture: the variable a (value 5) is stored at address 2000;
pa (value 2000, pointing to a) at address 3000; and ppa (value 3000,
pointing to pa) at address 4000.
#include <stdio.h>
/* print each string in an array of character pointers */
void print(char* arr[], int n)
{
    for (int i = 0; i < n; i++)
        printf("%s\n", arr[i]);
}
int main()
{
    char* arr[10] = {"Geek", "Geeks ", "Geekfor"};
    print(arr, 3);
    return 0;
}
C Array
• An array is defined as the collection of similar type of data items
stored at contiguous memory locations.
• Arrays are the derived data type in C programming language
which can store the primitive type of data such as int, char, double,
float, etc.
• It also has the capability to store the collection of derived data
types, such as pointers, structure, etc.
• The array is the simplest data structure, where each data element
can be randomly accessed by using its index number.
Declaration of a One Dimensional Array: Arrays must be declared
explicitly before they are used. The general form of declaration is:
Data_type Array_Name[Array_Size];
int marks[5];
• Here, int is the data_type, marks is the array_name, and 5 is
the array_size.
Initialization of One Dimensional Array
• Once an array is declared, it should be initialized. Otherwise the array
will contain garbage values. There are two different ways in
which we can initialize an array:
1. Compile time
2. Run time
• Compile Time Initialization
1. Initializing Arrays during Declaration
data-type array-name[size] = { list of values separated by comma };
For example:
int marks[5]={90, 82, 78, 95, 88};
• Note that if the number of values provided is less than the number of
elements in the array, the un-assigned elements are filled with zeros.
#include <stdio.h>
int main()
{ int arr1[]={1,2,3,4,5};
int arr2[]={0,2,4,6,8};
int arr3[]={1,3,5,7,9};
int *parr[3] = {arr1, arr2, arr3};
int i;
for(i = 0;i<3;i++)
printf(" %d", *parr[i]);
return 0;
}
Output: 1 0 1
• parr[0] stores the base address of arr1 (or, &arr1[0]). So writing *parr[0]
will print the value stored at &arr1[0]. Same is the case with *parr[1] and
*parr[2]
Pointer to an Array:
• We have seen that a pointer that pointed to the first element of array. We
can also declare a pointer that can point to the whole array.
• For example int (*ptr)[10]; here ptr is point to an array of 10 integer.
• Note that it is necessary to enclose the pointer name inside parentheses.
• We know that the pointer arithmetic is performed relative to the base size,
so if we write ptr++, then the pointer ptr will be shifted forward by 20 bytes.
#include <stdio.h>
int main()
{
int arr[] = { 3, 5, 6, 7, 9 };
int *p = arr;
int (*ptr)[5] = &arr;
printf("p = %p, ptr = %p\n", p, ptr);
printf("*p = %d, *ptr = %p *ptr[0]=%d\n", *p, *ptr, *ptr[0]);
printf("sizeof(p) = %lu, sizeof(*p) = %lu\n", sizeof(p), sizeof(*p));
return 0;
}
Output:
p = 000000000062FDF0, ptr = 000000000062FDF0
*p = 3, *ptr = 000000000062FDF0 *ptr[0]=3
sizeof(p) = 8, sizeof(*p) = 4
#include <stdio.h>
int main()
{
    int *p;
    int x[5];
    int (*ptr)[5];
    p=x;     // Points to 0th element of x.
    ptr=&x;  // Points to the whole array x.
    printf(" p=%u, ptr=%u\n", p, ptr);
    p++;
    ptr++;
    printf(" p=%u, ptr=%u\n", p, ptr);
    return 0;
}
Output (illustrative addresses, assuming 4-byte int):
p=3000, ptr=3000
p=3004, ptr=3020
int main()
{
int(*a)[5]; // Pointer to an array of five numbers
int b[5] = { 1, 2, 3, 4, 5 };
int i = 0;
a = &b; // Points to the whole array b
for (i = 0; i < 5; i++)
printf("%d\n", *(*a + i));
return 0;
}
Output
1
2
3
4
5
Multi Dimensional Array: It can be defined as an array which has
more than one subscript. Generally a 2-D array is called a matrix. It
is more suitable for the processing of tables and matrix
manipulations. C language allows programmers to use arrays with
more than two dimensions. On this basis it can be divided into two
sub-sections:
Two Dimensional Array
N-Dimensional Array
Data_Type Array_Name[row size][column size];
For Example: int x[3][4];
Where int is the type of the array, x is the array name, row size is the
number of rows and column size is the number of columns. To find the
total number of elements of an array, multiply the total number of rows
by the total number of columns.
For Example: int a[3][4];
• The computer stores the base address, and the address of the other
elements is calculated using the following formulas.
• If the array elements are stored in row major order,
Address(A[I][J]) = Base_Address + w{N (I – 1) + (J – 1)}
• If the array elements are stored in column major order,
Address(A[I][J]) = Base_Address + w{(I – 1) + M ( J – 1) }
• where w is the number of bytes required to store one element, M is
the number of rows, N is the number of columns, and I and J are the
subscripts of the array element.
• Example Consider a 20 * 5 two-dimensional array marks which has
its base address = 1000 and the size of an element = 2. Now
compute the address of the element, marks[18][ 4] assuming that the
elements are stored in row major order.
• Solution
Address(A[I][J]) = Base_Address + w{N (I – 1) + (J – 1)}
Address(marks[18][4]) = 1000 + 2 {5(18 – 1) + (4 – 1)}
= 1000 + 2 {5(17) + 3}
= 1000 + 2 (88)
= 1000 + 176
= 1176
Initializing Two-Dimensional Arrays:
Multidimensional arrays may be initialized by specifying
bracketed values for each row. Following is an array with
5 rows, where each row has 2 columns.
#include <stdio.h>
int main ()
{
int a[5][2] = { {0,0}, {1,2}, {2,4}, {3,6},{4,8}}; /* an array with 5 rows and 2 columns*/
int i, j;
/* output each array element's value */
for ( i = 0; i < 5; i++ )
{
for ( j = 0; j < 2; j++ )
{
printf("a[%d][%d] = %d\n", i, j, a[i][j] );
}
}
return 0;
}
Write a program to print a matrix of order m * n.
#include <stdio.h>
#include <conio.h>
void main( )
{
    int mat[50][50], i, j, m, n;
    clrscr( );
    printf("Enter the number of Rows: ");
    scanf("%d", &m);
    printf("Enter the number of Columns: ");
    scanf("%d", &n);
    printf("Enter the elements of the Matrix: ");
    for(i=0;i<m;i++)
    {
        for(j=0;j<n;j++)
        {
            scanf("%d", &mat[i][j]);
        }
    }
    printf("The Matrix is:\n");
    for(i=0;i<m;i++)
    {
        for(j=0;j<n;j++)
        {
            printf("\t%d", mat[i][j]);
        }
        printf("\n");
    }
    getch( );
}
Array with more than two dimensions
• A three-dimensional array can be viewed as an array of 2-D arrays. For example
int x[2][3][4];
This array consists of two 2-D arrays, and each of those 2-D arrays has 3
rows and 4 columns.
#include<stdio.h>
int i, j, k;  // variables for nested for loops
int main()
{
    int arr[2][3][4];  // array declaration
    printf("enter the values in the array: \n");
    for(i=0;i<2;i++)        // represents block
    {
        for(j=0;j<3;j++)    // represents rows
        {
            for(k=0;k<4;k++)    // represents columns
            {
                printf("the value at arr[%d][%d][%d]: ", i, j, k);
                scanf("%d", &arr[i][j][k]);
            }
        }
    }
    printf("print the values in array: \n");
    for(i=0;i<2;i++)
    {
        for(j=0;j<3;j++)
        {
            for(k=0;k<4;k++)
            {
                printf("%d ", arr[i][j][k]);
            }
            printf("\n");
        }
        printf("\n");
    }
    return 0;
}
Pointer to Two dimensional Array:
• In two dimensional arrays, we can access each element by using two
subscripts, where first subscript represents row number and second
subscript represents the column number.
• A two dimensional array is of the form x[i][j]. Let's see how we can make
a pointer point to such an array. As we know, the name of the array gives
its base address.
• In x[i][j], x will give the base address of this array, that is, the address
of the x[0][0] element.
• In the case of 2-D arrays, the first element is a 1-D array, so the name of
a 2-D array represents a pointer to a 1-D array: x + i points to the ith row,
and x[i] (i.e., *(x + i)) gives the base address of the ith row, where x is
the two-dimensional array.
• Individual elements of the array x can be accessed using either:
• x[i][j] or *(*(x + i) + j) or *(x[i] + j)
• Since memory in a computer is organized linearly, it is not possible to store
the 2-D array literally in rows and columns. The concept of rows and columns is
only theoretical; actually, a 2-D array is stored in row-major order, i.e. rows
are placed next to each other. The following figure shows how the above
2-D array will be stored in memory.
For example, *(arr + 2) gives the base address of row 2, *(arr + 2) + 3
points to the element arr[2][3], and *(*(arr + 2) + 3) is its value.
#include<stdio.h>
int main()
{
int arr[3][4] = {{10, 11, 12, 13}, {20, 21, 22, 23}, {30, 31, 32, 33} };
int (*ptr)[4];
ptr = arr;
printf("%p %p %p\n", ptr, ptr + 1, ptr + 2);
printf("%p %p %p\n", *ptr, *(ptr + 1), *(ptr + 2));
printf("%d %d %d\n", **ptr, *(*(ptr + 1) + 2), *(*(ptr + 2) + 3));
printf("%d %d %d\n", ptr[0][0], ptr[1][2], ptr[2][3]);
return 0;
}
Output:
0x7ffead967560 0x7ffead967570 0x7ffead967580
0x7ffead967560 0x7ffead967570 0x7ffead967580
10 22 33
10 22 33
#include <stdio.h>
int main( )
{
    int s[4][2] = {{ 12, 56 }, { 12, 33 }, { 14, 80 }, { 13, 78 }};
    int i, j;
    for ( i = 0 ; i <= 3 ; i++ )
    {
        printf("\n Address of %d th row one D array=%u %u\n", i, s[i], *(s+i));
        printf("\n");
        for ( j = 0 ; j <= 1 ; j++ )
            printf("%d ", *( *( s + i ) + j ));
    }
    return 0;
}
Output:
Address of 0 th row one D array=2293400 2293400
12 56
Address of 1 th row one D array=2293408 2293408
12 33
Address of 2 th row one D array=2293416 2293416
14 80
Address of 3 th row one D array=2293424 2293424
13 78
• The golden rule to access an element of a two-dimensional array can
be given as arr[i][j] = (*(arr+i))[j] = *(*(arr+i)+j) = *(arr[i]+j)
• Therefore, arr[0][0] = (*arr)[0] = *((*arr)+0) = *(arr[0]+0)
• beg = 0
• end = 8
• mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.
Binary Search complexity