
Unit 1: Introduction to Data Structures and Algorithms

Data
▪ Data represents quantities, characters, or symbols on which operations are performed
by a computer.
▪ Simply, we can say that the quantities, characters, or symbols operated on by a computer
are called data.
▪ Data is the basic entity or fact that is used in a calculation or manipulation process.
▪ It can exist in various forms, including numbers, text, images, audio, or video.

Data vs. Information


▪ Data are raw facts without context, whereas information is data with context.
▪ Data requires interpretation to become information; so, the processed form of data is
called information.
▪ Data can be any character, text, word, number, picture, sound, or video and, if not put into
context, means nothing to a human or computer.
▪ For example, 10409 by itself is data, whereas "Rs. 10409 is the salary of someone" is
information.

Data Structure
▪ Data structure is the way of organizing and storing data in a computer system so that it can
be accessed and used efficiently.
OR
▪ A data structure is a format for organizing, processing, retrieving and storing data so it can
be easily accessed and effectively used.
▪ Data structures make it easy for users to access and work with the data they need in
appropriate ways.
▪ Selecting the correct data structure enables us to efficiently perform various important
operations while effectively managing both memory usage and execution time.
▪ Examples of data structures include Array, Stack, Queue, Linked List, Tree, Graph, etc.
Importance of Data Structure
▪ The main objective of a data structure is to store, retrieve, and update the data efficiently.
▪ Data structures are used in almost every program and software system.
▪ Data structures are the building blocks of a program. For a program to run efficiently, a
programmer must choose appropriate data structures.
▪ The data structures form the foundation of computer programming and enable more effective
and efficient solutions.
▪ Therefore, data structures are important for developing high-quality programs and software
systems.
Types of Data Structures
Data structures are divided into the following important types.
1. Linear Data Structure
▪ A linear data structure is one in which the data elements are stored in a linear, or
sequential, order; that is, data is stored in consecutive memory locations.
▪ It keeps the data at a single level, so that all the elements can be traversed in a single run.
▪ Examples of linear data structures include arrays, linked lists, stacks, queues, and so on.
11 12 13 14
(Linear Data Structure)
2. Non-Linear Data Structure
▪ A non-linear data structure is one in which the data is not stored in any sequential order or
consecutive memory locations.
▪ The data elements in this structure are arranged in a hierarchical order.
▪ There are multiple levels in a non-linear data structure, so it is not easy to traverse it in a
single run.
▪ Examples of non-linear data structure include Graph and Tree.

(Non-linear Data Structure: Tree)


3. Static Data Structure
▪ The type of data structure which has a fixed size that cannot be changed while the program
is running is called a static data structure.
▪ A static data structure is a kind of data structure in which, once memory space is
allocated, it cannot be extended; i.e., the memory allocation for the data structure takes place
before run time and cannot be changed afterwards.
▪ Example: Array
4. Dynamic Data Structure
▪ A dynamic data structure is another kind of data structure, whose size is not fixed and can
be extended or shrunk during execution.

▪ The memory allocation as well as de-allocation for the data structure takes place at
run time, and memory is allocated in the required amount at any time.
▪ Example: linked list
5. Homogeneous Data Structure
▪ A homogeneous data structure is one that contains data elements of the same type, for
example, Arrays.
11 12 13 14
In the above array, all the data items are of the same type, i.e., integers.

6. Non-Homogeneous Data Structure
▪ A non-homogeneous data structure contains data elements of different types, for example,
structures.
▪ It is also called Heterogeneous data structure.

7. Primitive Data Structure


▪ Primitive data structures are the fundamental data structures or predefined data structures
which are supported by a programming language.
▪ Examples of primitive data structure types are integer, long, float, double, char etc.
8. Non-Primitive Data Structure
▪ Non-primitive data structures are comparatively more complicated data structures that are
created using primitive data structures.
▪ Examples of nonprimitive data structures are arrays, linked lists, stacks, queues, and so on.
9. Physical Data Structure
▪ Physical data structure refers to how data is stored in memory or on storage devices.
▪ It deals with the actual layout and organization of data in terms of memory addresses,
storage locations, and data representation.
▪ Physical data structures are concerned with the low-level implementation details.
10. Logical Data Structure
▪ Logical Data Structure refers to how data is perceived and accessed by users or programs.
▪ It defines the logical relationships and operations performed on the data, without
considering the underlying physical storage details.
▪ Logical data structures focus on the abstract representation of data and the operations that
can be performed on it.

Description of various Data Structures
1. Array
▪ An array is a collection of data elements of similar type stored at consecutive locations in
memory.
▪ It is a linear data structure that stores data items of the same type in a linear manner.
▪ The group of array elements is referred to by a common name, called the array name.
▪ Access to individual array elements is provided with the help of a number (integer value)
called index or subscript.
▪ In C/C++, array index starts with 0.
▪ Example: The representation of array ‘A’ having five elements is given below:

▪ In the above array, A[3] refers to the element at index 3 (i.e., 14) of the array A.
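A minimal C sketch of declaring and accessing such an array (the first four values follow the earlier examples; the fifth value and the printed indices are illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* An array of five integers stored at consecutive memory locations */
        int A[5] = {11, 12, 13, 14, 15};

        /* The index (subscript) starts at 0 in C/C++ */
        printf("A[0] = %d\n", A[0]);   /* first element: 11 */
        printf("A[3] = %d\n", A[3]);   /* element at index 3: 14 */

        return 0;
    }
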
2. Stack
▪ Stack is a linear data structure in which the insertion and deletion operations are performed
at only one end. This end is referred as top of the stack.
▪ Stack is based on the Last-In-First-Out (LIFO) principle, which means the element that is
last added to the stack is the one that is first removed from the stack.
▪ In a computer’s memory, stacks can be implemented using arrays or linked lists.
▪ A real-life example of a stack is a pile of books placed on a table.

Stack of Books
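A minimal array-based stack sketch in C; the capacity MAX and the function names push and pop are illustrative:

    #include <stdio.h>

    #define MAX 100

    int stack[MAX];
    int top = -1;                      /* -1 means the stack is empty */

    /* Insertion (push) at the top of the stack */
    void push(int value)
    {
        if (top == MAX - 1) { printf("Stack overflow\n"); return; }
        stack[++top] = value;
    }

    /* Deletion (pop) from the top of the stack */
    int pop(void)
    {
        if (top == -1) { printf("Stack underflow\n"); return -1; }
        return stack[top--];
    }

    int main(void)
    {
        push(10); push(20); push(30);
        printf("%d\n", pop());         /* 30 is removed first (LIFO) */
        return 0;
    }
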
3. Queue
▪ Queue is a linear data structure in which the insertion and deletion operations are performed at
two different ends.
▪ The insertion is performed at one end (called Rear) and deletion is performed at another end
(called Front).
▪ Queue is based on the First-In-First-Out (FIFO) principle, which means the element that is
first added to the queue is also the one that is first removed from the queue.
▪ A queue of people standing at a bus stop can be considered similar to a queue data
structure.
▪ Representation of Queue:
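A minimal array-based queue sketch in C (a simple linear queue; the capacity and the function names enqueue and dequeue are illustrative):

    #include <stdio.h>

    #define MAX 100

    int queue[MAX];
    int front = 0, rear = -1;

    /* Insertion at the rear of the queue */
    void enqueue(int value)
    {
        if (rear == MAX - 1) { printf("Queue is full\n"); return; }
        queue[++rear] = value;
    }

    /* Deletion from the front of the queue */
    int dequeue(void)
    {
        if (front > rear) { printf("Queue is empty\n"); return -1; }
        return queue[front++];
    }

    int main(void)
    {
        enqueue(10); enqueue(20); enqueue(30);
        printf("%d\n", dequeue());     /* 10 is removed first (FIFO) */
        return 0;
    }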

4. Linked List
▪ A Linked list is a linear collection of data elements (called nodes) connected together
through pointers.
▪ Each node comprises two parts: one part contains the data value, while the other part
contains a pointer to the next node in the list.
(Node)
▪ The nodes of a linked list are scattered in memory but linked with each other in a linear
manner (one node is connected to the next node).
▪ A linked list is a dynamic data structure; this means memory is allocated as and when
required.
▪ A special pointer, usually called the “front” or “head”, contains the reference (address) of the
1st node, while the link field of the last node contains NULL.
▪ A real-life example of a linked list can be seen in a train.

(Linked List)   (A train, as a real-life example of a linked list)
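A minimal C sketch of a singly linked list node and its traversal, following the description above; the node values and variable names are illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    /* Each node has a data part and a pointer to the next node */
    struct Node {
        int data;
        struct Node *next;
    };

    int main(void)
    {
        /* Build a small list: head -> 10 -> 20 -> NULL */
        struct Node *head   = malloc(sizeof(struct Node));
        struct Node *second = malloc(sizeof(struct Node));
        head->data = 10;   head->next = second;
        second->data = 20; second->next = NULL;   /* last node's link is NULL */

        /* Traverse from the head node until NULL is reached */
        for (struct Node *p = head; p != NULL; p = p->next)
            printf("%d ", p->data);
        printf("\n");

        free(second);
        free(head);
        return 0;
    }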

5. Tree
▪ A tree is a non-linear data structure that arranges its data elements (i.e., nodes) in the form
of a hierarchical structure (having a one-to-many relationship).
▪ The node present at the top of the tree structure is referred to as the root node.
▪ Each node has zero or more child nodes.
▪ A node having child nodes is called a parent node, while a node having no child is called a
leaf node.
▪ The binary tree is the simplest form of tree, in which every parent node (including the root)
has a maximum of two children.
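A minimal C sketch of a binary tree node, in which each parent has at most two children; the structure and helper names are illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    /* A binary tree node: data plus pointers to at most two children */
    struct TreeNode {
        int data;
        struct TreeNode *left;
        struct TreeNode *right;
    };

    /* Illustrative helper that creates a node with no children */
    struct TreeNode *newNode(int value)
    {
        struct TreeNode *n = malloc(sizeof(struct TreeNode));
        n->data = value;
        n->left = n->right = NULL;
        return n;
    }

    int main(void)
    {
        struct TreeNode *root = newNode(1);    /* root node */
        root->left  = newNode(2);              /* child nodes */
        root->right = newNode(3);
        printf("root=%d left=%d right=%d\n",
               root->data, root->left->data, root->right->data);
        free(root->left); free(root->right); free(root);
        return 0;
    }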

6. Graph
▪ A graph is a linked data structure that comprises a group of vertices, called nodes, and a
group of edges.
▪ In a graph, the relationship among nodes may be many-to-many.
▪ A graph G may be defined as a finite set of vertices V and a finite set of edges E.
▪ Therefore, a graph G can be represented as G = (V, E) where V is the set of vertices and E
is the set of edges.
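One common way to store G = (V, E) in C is an adjacency matrix; a minimal sketch for a small undirected graph (the vertex count and the edges are illustrative):

    #include <stdio.h>

    #define V 4                     /* number of vertices (illustrative) */

    int adj[V][V];                  /* adj[u][v] = 1 if an edge connects u and v */

    /* Add an undirected edge between vertices u and v */
    void addEdge(int u, int v)
    {
        adj[u][v] = 1;
        adj[v][u] = 1;              /* many-to-many relationships are allowed */
    }

    int main(void)
    {
        addEdge(0, 1);
        addEdge(0, 2);
        addEdge(1, 3);
        printf("Edge between 1 and 3? %s\n", adj[1][3] ? "yes" : "no");
        return 0;
    }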

Operations on Data Structures


There are several common operations associated with data structures that are used for
manipulating the stored data. Some of them are:
1. Insertion: It is the process of adding a new record into a data structure.
2. Deletion: It is the process of removing an existing record from a data structure.
3. Traversing: It is the process of accessing each record of a data structure exactly once.
4. Searching: It is the process of identifying the location of a record that contains a specific key
value.
5. Sorting: It is the process of arranging the records of a data structure in a specific order, such
as alphabetical, ascending, or descending.
6. Merging: It is the process of combining the records of two different sorted data structures to
produce a single sorted data structure.
Algorithm
▪ An algorithm can be defined as a step-by-step procedure that provides a solution to a given
problem.
▪ It comprises a well-defined set of a finite number of steps or rules that are executed
sequentially to obtain the desired solution.
▪ Examples of Algorithm:

1. Algorithm for Preparing Tea
Step 1: Fill up the kettle with water.
Step 2: Add some sugar and tea powder.
Step 3: Boil the mixture.
Step 4: Add some milk.
Step 5: Boil again for 2 minutes.
Step 6: Tea is ready.

2. Algorithm: Addition
This algorithm is used to add three numbers.
Step 1: Read: A, B, C
Step 2: Sum = A + B + C
Step 3: Print: Sum
Step 4: Exit

Characteristics of an Algorithm

There are certain key characteristics that an algorithm must possess. These characteristics
are:
1. An algorithm must comprise a finite number of steps.
2. It should have zero or more valid and clearly defined input values.
3. It should be able to generate at least a single valid output based on a valid input.
4. It must be definite, i.e., each instruction in the algorithm should be defined clearly.
5. It should be correct, i.e., it should be able to perform the desired task of generating correct
output from the given input.
6. There should be no ambiguity regarding the order of execution of algorithm steps.
7. It should be able to terminate on its own, i.e., it should not go into an infinite loop.

Writing Algorithms
Following are some of the general conventions that are followed while writing algorithms:

1. Provide a valid name for the algorithm.


2. Write the purpose (introductory comment) of algorithm.
3. Write the specific steps which finally lead to the solution of the given problem.
General convention for writing steps:
▪ Use INPUT/READ and PRINT/WRITE instructions to specify input and output
operations respectively.
Example: INPUT: A, B, C and PRINT: SUM
▪ Use SET instruction for assigning value to a variable. Example: SET: SUM=A+B+C
▪ Use if or if-else constructs for conditional statements. You must end an if statement
with the corresponding endif statement. The general format of these instructions
is shown below:
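(A typical format, consistent with the [endif] convention used in the algorithms later in this unit:)

    If (condition) then
        statement(s)
    [endif]

    If (condition) then
        statement(s)
    Else
        statement(s)
    [endif]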

▪ For looping or iterative statements, you can use for or while looping constructs. A for
loop must end with an endfor statement, while a while loop must end with an endwhile
statement, as depicted below:
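(A typical format, consistent with the [endfor] and [endwhile] conventions used in the algorithms later in this unit:)

    For counter = start to end by step do
        statement(s)
    [endfor]

    While (condition) do
        statement(s)
    [endwhile]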

Some important algorithms

1. Write an algorithm to accept two numbers and find the Maximum.

Algorithm: MAXIMUM
This algorithm is used to find the maximum number between two input numbers.
Steps:
1. Read: A, B
2. If (A>B) then
Write: ‘maximum number is’, A
Else
Write: ‘maximum number is’, B
[endif]
3. Exit

2. Write an algorithm that accepts a number and finds its FACTORIAL.


Algorithm: FACTORIAL
This algorithm is used to find the FACTORIAL of a number. N is the number, Fact
represents the factorial value, and i is the counter variable.
Steps:
1. Read: N
2. Set Fact=1
3. Repeat step 4 for i = 1 to N by 1
4. Fact = Fact * i
[endfor]
5. Write: ‘Factorial=’, Fact
6. Exit

3. Write an algorithm that finds the largest element in a given array.

Algorithm: LARGEST
This algorithm is used to find the largest item in the given array of size N.
Steps:
1. Set largest = A[0]
2. Set i = 1
3. Repeat steps 4 to 6 while (i < N)
4. If A[i] > largest,
5. set largest = A[i]
6. Set: i=i+1
[endwhile]
7. Write: ‘largest=’, largest
8. Exit/End
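A C sketch of the LARGEST algorithm above, assuming the array A and its size N are already available (the sample values are illustrative):

    #include <stdio.h>

    int main(void)
    {
        int A[] = {12, 45, 7, 89, 23};        /* illustrative input array */
        int N = 5;

        int largest = A[0];                   /* Step 1 */
        for (int i = 1; i < N; i++)           /* Steps 2, 3 and 6 */
            if (A[i] > largest)               /* Step 4 */
                largest = A[i];               /* Step 5 */

        printf("largest = %d\n", largest);    /* Step 7 */
        return 0;
    }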

Analysis of Algorithms

Analysis of algorithms is the process of studying the performance and efficiency of an algorithm in
terms of its time and space complexity. It involves evaluating the behaviour of an algorithm as the
size of the input data grows, and identifying the best algorithm to solve a particular problem based
on its performance.

By analyzing the algorithm, we can estimate how much time and space it will take to execute for
different input sizes. This information is essential when we want to compare different algorithms,
choose the most appropriate algorithm for a particular problem, or optimize an existing algorithm.

The primary goal of analyzing an algorithm is to determine its time complexity and space
complexity.

Time Complexity – The time complexity of an algorithm is the amount of time, or the number of
operations, required by the algorithm to run completely. It is the running time of the program. The
time complexity of an algorithm depends upon the input size.

Space Complexity – The space complexity of an algorithm is the amount of memory space
required to run the program completely. The space complexity of an algorithm depends upon the
input size.

Categories of Time Complexity

Time complexity is categorized into three types:

Best-case complexity: This refers to the minimum time required by an algorithm to solve a
problem or complete a task, assuming that the input data is in the best possible form. The best-
case time complexity is often not very useful in practice since it does not represent the typical
case.

Average-case complexity: This refers to the expected time required by an algorithm to solve a
problem or complete a task, assuming that the input data is randomly distributed. In other words, it
represents the time taken by the algorithm when the input data is neither in the best-case nor the
worst-case form, but somewhere in between.

Worst-case complexity: This refers to the maximum time required by an algorithm to solve a
problem or complete a task, assuming that the input data is in the worst possible form. The worst-
case time complexity is often used to represent the performance characteristics of an algorithm
since it provides an upper bound on the time required by the algorithm.

Complexity of an Algorithm

Algorithm complexity refers to the mathematical notation used to describe the growth rate of an
algorithm's resource requirements as the input size increases. Algorithm complexity provides a
way of expressing those characteristics in a concise and formal way.
Complexity is often expressed using big O notation, which tells us at most how much resource an
algorithm will require as the input size increases.

The following are the meanings of some common big O notations:

O(1): This notation represents an algorithm that has a constant time complexity. This means that
the algorithm takes the same amount of time to complete, regardless of the size of the input.

O(log n): This notation represents an algorithm that has a logarithmic time complexity. This
means that the time required to complete the algorithm increases logarithmically as the size of the
input increases.

O(n): This notation represents an algorithm that has a linear time complexity. This means that the
time required to complete the algorithm increases linearly with the size of the input.

O(n log n): This notation represents an algorithm that has a quasilinear time complexity. This
means that the time required to complete the algorithm grows in proportion to n log n, i.e., slightly
faster than linearly, as the size of the input increases.

O(n^2): This notation represents an algorithm that has a quadratic time complexity. This means
that the time required to complete the algorithm increases in proportion to the square of the size of
the input.

O(2^n): This notation represents an algorithm that has an exponential time complexity. This
means that the time required to complete the algorithm increases exponentially with the size of the
input, making it very inefficient for large inputs.

Example: Find the time complexity of the following algorithm.

1. Read: n
2. for k = 1 to n by 1 do
k = k + 2;
write: k
3. Exit/End
Solution:
Program Statement Frequency Count
Read: N 1
for k = 1 to n by 1 do (n+1)
k=k+2 n
Write: k n
Exit/End 1
Total frequency count= 3n+3

So, the complexity of this algorithm is O(n).

Example 2: Find the time complexity of the following code.
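A C-style sketch consistent with the statements counted in the solution below (element-wise addition of two n x n matrices; the arrays a, b, c and the variables i, j, n are assumed to be declared already):

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            c[i][j] = a[i][j] + b[i][j];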

Solution:
Program Statement                  Frequency Count
i=0                                1
i<n                                n+1
i++                                n
j=0                                n x 1 = n
j<n                                n(n+1) = n^2 + n
j++                                n x n = n^2
c[i][j]=a[i][j]+b[i][j]            n x n = n^2
Total frequency count = 3n^2 + 4n + 2

If the constants and lower-order terms are neglected, the complexity will be O(n^2).


Development of an Algorithm
Following are the important steps of designing/developing an algorithm:

1. Understand the Problem: Fully understand the problem statement or task you need to solve.
Understand the input, output, and constraints/rules.
2. Choose the Right Data Structure: Select the appropriate data structure(s) to solve the problem
efficiently. The choice depends on the nature of the problem and the operations you need to
perform.
3. Define Inputs and Outputs: Clearly define what your algorithm expects as input and what it
should produce as output. Consider edge cases and invalid inputs.
4. Design the Algorithm: Break down the problem into smaller, manageable subproblems.
5. Pseudo-code: Write pseudo-code to outline the steps of your algorithm. Pseudo-code should be
clear and understandable, focusing on the logic without getting bogged down in syntax.
6. Implement: Translate your pseudo-code into actual code using your chosen programming
language.
7. Test and Debug: Test your algorithm thoroughly with various test cases. Ensure that it produces
correct results and handles errors gracefully. Debug any issues that arise during testing.
8. Analyze Time and Space Complexity: Analyze the time and space complexity of your
algorithm to understand its efficiency. Consider optimizing the algorithm if necessary.
9. Optimize (if needed): If your algorithm isn't meeting performance requirements, consider
optimization techniques to reduce time or space complexity.


