
Data Structures: Introduction and Types

Introduction: A data structure is a systematic way of organizing, managing, and storing data in a
computer so it can be accessed and modified efficiently. Data structures enable efficient data
processing, manipulation, and retrieval for various computational tasks.

Types of Data Structures:

1. Linear Data Structures:

o Elements are stored sequentially, and each element is connected to its previous
and next element.

o Examples:

 Arrays: A fixed-size collection of elements stored in contiguous memory locations.

 Linked Lists: A collection of nodes where each node contains data and a
reference (or link) to the next node.

 Stacks: Follows the Last In, First Out (LIFO) principle.

 Queues: Follows the First In, First Out (FIFO) principle.

2. Non-Linear Data Structures:

o Elements are not arranged sequentially and may have multiple relationships.

o Examples:

 Trees: A hierarchical structure consisting of nodes, with each node (other than the root) having one parent and potentially multiple children.

 Graphs: Consist of vertices (nodes) connected by edges (links), representing relationships between pairs of elements.

Algorithms Analysis: Introduction and Key Concepts

Introduction: Algorithm analysis is the study of the efficiency of algorithms in terms of time and
space requirements. It helps determine the best way to solve problems with minimal resource
usage.

1. A Priori Analysis: The theoretical analysis of an algorithm, i.e., determining its efficiency before actual implementation by analyzing the algorithm's structure and behavior.

2. A Posteriori Testing: Evaluating the algorithm's performance after implementation by running it on real data and measuring factors like execution time and memory consumption.

Characteristics of Algorithms:

 Correctness: The algorithm must produce the correct output for all possible inputs.
 Efficiency: The algorithm should minimize resource usage, such as time and memory.

 Finiteness: The algorithm must terminate after a finite number of steps.

 Definiteness: Each step of the algorithm must be clearly defined.

 Input and Output: The algorithm should accept valid inputs and produce desired outputs.

Algorithm Design Techniques:

 Divide and Conquer: The problem is divided into smaller sub-problems, which are solved recursively and whose solutions are then combined.

 Greedy Method: Makes locally optimal choices at each stage with the hope of finding a
global optimum.

 Dynamic Programming: Breaks problems into overlapping subproblems and solves each
subproblem just once, storing the results.

 Backtracking: Explores possible solutions incrementally and abandons paths that don’t
lead to a solution.

Time Complexity: An Overview

Time complexity is a computational metric used to describe the amount of time an algorithm takes
to run as a function of the size of its input. It helps in determining the efficiency and scalability of an
algorithm. The larger the input size, the more time the algorithm may take to process it.

Why is Time Complexity Important?

 Comparing Algorithms: Helps compare different algorithms and choose the most efficient
one for large datasets.

 Scalability: Helps predict how an algorithm will perform as the input size grows.

 Optimization: Understanding time complexity can guide developers in optimizing their code.

How is Time Complexity Measured?

Time complexity is usually measured by counting the number of basic operations (like comparisons, additions, multiplications, etc.) that an algorithm performs as a function of the input size n. It focuses on the order of growth of the running time, ignoring lower-order terms and constant factors.

Big O Notation: Describing Time Complexity

The Big O notation is used to classify algorithms based on their worst-case or upper-bound performance. It describes how the running time of an algorithm grows as the input size n increases.

Common Time Complexities:

1. Constant Time – O(1):

o The running time of the algorithm is independent of the input size.

o Example: Accessing an element in an array by index.

2. Logarithmic Time – O(log n):

o The running time increases logarithmically with the input size.

o Example: Binary search on a sorted array.

3. Linear Time – O(n):

o The running time increases linearly with the input size.

o Example: Iterating through an array.

4. Linearithmic Time – O(n log n):

o The running time is proportional to n times the logarithm of n.

o Example: Merge sort, quicksort in the average case.

5. Quadratic Time – O(n^2):

o The running time grows as the square of the input size.

o Example: Bubble sort, insertion sort.

6. Cubic Time – O(n^3):

o The running time grows as the cube of the input size.

o Example: Multiplying two n × n matrices using a naive algorithm.

7. Exponential Time – O(2^n):

o The running time doubles with each addition to the input size.

o Example: Solving the traveling salesman problem with brute-force search.

8. Factorial Time – O(n!):

o The running time grows factorially with the input size.

o Example: Solving the traveling salesman problem using all permutations.

Worst Case, Best Case, and Average Case

 Worst Case: The maximum time an algorithm will take for any input of size n.

 Best Case: The minimum time an algorithm will take for any input of size n.

 Average Case: The expected running time of the algorithm for a typical input.

In most cases, we focus on worst-case time complexity as it provides a guaranteed upper bound on
performance.

Other Asymptotic Notations

Besides Big O, there are other notations used to analyze time complexity:

 Omega (Ω) Notation: Describes the best-case or lower-bound performance.

o Ω(f(n)) means the algorithm takes at least f(n) time for large n.

 Theta (Θ) Notation: Describes the tight bound (both upper and lower bounds).

o Θ(f(n)) means the algorithm’s running time grows exactly as f(n) for large n.

 Little O (o) Notation: Describes an upper bound that is not tight (the algorithm performs
strictly better than this).

 Little Omega (ω) Notation: Describes a lower bound that is not tight (the algorithm performs
strictly worse than this).

Time Complexity Examples

1. Linear Search: Checking each element in a list to find a target value.

o Time Complexity: O(n) since in the worst case, the algorithm may need to inspect
every element.

2. Binary Search: Searching for an element in a sorted array by repeatedly dividing the search
interval in half.

o Time Complexity: O(log n) because each step cuts the problem size in half.

3. Merge Sort: Divides the array into halves and recursively sorts each half, then merges the
two halves.

o Time Complexity: O(n log n), which is more efficient than quadratic algorithms like bubble sort.
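
As a concrete illustration of the binary search entry above, here is a minimal Python sketch (a sketch, not taken from this document); each loop iteration halves the search interval, which is what gives the O(log n) bound:

python

def binary_search(arr, target):
    # arr must be sorted; returns the index of target, or -1 if absent
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2        # middle of the current interval
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1              # discard the left half
        else:
            high = mid - 1             # discard the right half
    return -1

# binary_search([1, 3, 5, 7, 9], 7) returns 3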

Space Complexity and Time-Space Tradeoff

Space complexity refers to the amount of memory an algorithm uses as a function of the input size
n. Similar to time complexity, it helps in determining how efficiently an algorithm utilizes memory
resources.

There is often a time-space tradeoff: improving the time efficiency of an algorithm may increase its
memory usage, and vice versa. For example, a dynamic programming algorithm might save time by
storing intermediate results, but this increases the memory footprint.

Time Complexity in Practice

When designing or analyzing algorithms, you will need to:

1. Identify the basic operation: The key operation whose frequency dominates the running
time (e.g., comparisons, swaps).

2. Determine input size: n typically refers to the number of elements in the input.

3. Evaluate performance: How does the number of basic operations grow with the size of the
input?

4. Ignore constants and lower-order terms: Focus on the dominant term (as n grows, smaller terms and constants become negligible).
Key Techniques to Reduce Time Complexity

 Divide and Conquer: Break down the problem into smaller subproblems, solve them
recursively, and combine the results.

o Example: Merge Sort.

 Greedy Algorithms: Make locally optimal choices to arrive at a global optimum.

o Example: Kruskal’s or Prim’s algorithm for finding a Minimum Spanning Tree (MST).

 Dynamic Programming: Store solutions to subproblems to avoid recomputing them.

o Example: Solving the Fibonacci sequence with memoization instead of recursion.
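
As an illustration of the last point, a minimal memoized Fibonacci sketch (assuming Python's standard functools.lru_cache as the memo store); caching each result reduces the naive exponential recursion to O(n):

python

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # each distinct n is computed only once; repeated calls hit the cache
    return fib(n - 1) + fib(n - 2)

# fib(50) returns 12586269025 almost instantly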

1. Arrays

Definition: An array is a linear data structure that stores elements of the same data type in
contiguous memory locations. Each element in an array can be accessed using an index, and the size
of the array is fixed once it is declared.

Key Features:

 Fixed Size: The size of an array must be defined at the time of its declaration and cannot be
changed later.

 Same Data Type: All elements in an array must be of the same type (e.g., all integers, all
floats).

 Indexed Access: Elements in an array can be accessed using an index starting from 0. For
example, the first element is at index 0, the second at index 1, and so on.

Operations on Arrays:

 Access: Accessing an element takes constant time, O(1), as you can directly use the index.

 Insertion: Inserting at a specific position requires shifting elements and thus takes O(n) time
in the worst case (for inserting at the beginning).

 Deletion: Similar to insertion, deleting an element may require shifting the remaining
elements, and it also takes O(n) time.
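
A minimal sketch of the shifting behavior described above, using a Python list to stand in for a fixed-size array (the helpers insert_at and delete_at are illustrative names, not standard library calls):

python

def insert_at(arr, position, element):
    # make room at the end, then shift elements right: O(n) worst case
    arr.append(None)
    for i in range(len(arr) - 1, position, -1):
        arr[i] = arr[i - 1]
    arr[position] = element

def delete_at(arr, position):
    # shift elements left over the deleted slot: O(n) worst case
    for i in range(position, len(arr) - 1):
        arr[i] = arr[i + 1]
    arr.pop()

nums = [10, 20, 30]
insert_at(nums, 1, 15)   # [10, 15, 20, 30]
delete_at(nums, 0)       # [15, 20, 30]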

Pros:
 Simple to use and understand.

 Direct access to elements using indices makes certain operations (like reading) very fast.

Cons:

 Fixed size can lead to either wasted memory (if the array is too large) or memory overflow (if
the array is too small).

 Insertion and deletion can be slow, especially for large arrays, as elements need to be
shifted.

2. Linked Lists

Definition: A linked list is a linear data structure where elements are stored in nodes. Each node
contains two parts: the data and a reference (or pointer) to the next node in the sequence. Unlike
arrays, linked lists are dynamic, meaning they can grow or shrink during execution.

Types of Linked Lists:

1. Singly Linked List: Each node points to the next node. The last node points to NULL.

2. Doubly Linked List: Each node has two references: one pointing to the next node and
another pointing to the previous node.

3. Circular Linked List: The last node points back to the first node, forming a circle.

Operations on Linked Lists:

 Access: Accessing an element requires traversing the list from the head (start) to the desired
node, so it takes O(n) time.

 Insertion: Insertion at the beginning is O(1) (and at the end as well, if a tail pointer is maintained), but inserting in the middle requires traversing the list to the desired position (O(n)).

 Deletion: Similar to insertion, deleting the first element is O(1), but deleting an element from
the middle or end requires traversal (O(n)).
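
A minimal singly linked list sketch illustrating these costs: insertion at the head is O(1), while access by position requires an O(n) traversal:

python

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None          # reference to the next node

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, data):
        # O(1): the new node simply points at the old head
        node = Node(data)
        node.next = self.head
        self.head = node

    def get(self, index):
        # O(n): walk from the head to the desired position
        current = self.head
        for _ in range(index):
            current = current.next
        return current.data

lst = SinglyLinkedList()
for value in (3, 2, 1):
    lst.insert_front(value)
print(lst.get(2))   # 3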

Pros:

 Dynamic size: The list can grow or shrink as needed during runtime.

 Efficient insertion and deletion: Inserting or deleting nodes is faster compared to arrays,
especially at the beginning or end.
Cons:

 Access is slower: Unlike arrays, linked lists do not allow direct access to elements by index.
You must traverse the list, which takes O(n) time.

 Extra memory: Each node requires additional memory for the pointer/reference to the next
(or previous) node.

3. Stacks

Definition: A stack is a linear data structure that follows the LIFO (Last In, First Out) principle. This
means that the last element inserted into the stack will be the first one to be removed.

Key Operations:

 Push: Add an element to the top of the stack.

 Pop: Remove the element from the top of the stack.

 Peek (or Top): View the element at the top of the stack without removing it.

 isEmpty: Check whether the stack is empty.

Example:

A stack can be visualized as a stack of plates. The last plate placed on top is the first one you remove.

Implementation:

Stacks can be implemented using arrays or linked lists.


Operations on Stacks:

 Push: Add an element to the top of the stack. This operation is O(1).

 Pop: Remove the top element. This is also O(1).

 Peek: View the top element without removing it. This operation is O(1).

Use Cases of Stacks:

 Function Call Stack: When a function calls another function, the current function is "pushed"
onto the call stack. When the called function finishes, it is "popped" off.

 Undo Mechanism: In text editors, the undo function often uses a stack to store previous
states.

 Balanced Parentheses Check: Stacks are used to check if the parentheses in an expression
are balanced.
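
As a sketch of the last use case, a balanced-parentheses checker using a plain Python list as the stack (append as push, pop as pop):

python

def is_balanced(expr):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)         # push every opening bracket
        elif ch in pairs:
            # a closer must match the most recently pushed opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                 # leftover openers mean unbalanced

# is_balanced("(a[b]{c})") -> True; is_balanced("(]") -> False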
Pros:

 Simple and efficient for LIFO operations.

 Push and pop operations are constant time, O(1).

Cons:

 Limited access: You can only access the top element, making certain operations (like
searching) inefficient.

Recursion

Definition:
Recursion is a programming technique where a function calls itself to solve smaller instances of the
same problem. It is particularly useful for problems that exhibit overlapping subproblems or can be
divided into similar subproblems.

Key Concepts:

1. Base Case: The condition under which the recursive calls stop, preventing infinite recursion.

2. Recursive Case: The portion of the function where it calls itself with modified parameters to
progress towards the base case.
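
A minimal factorial sketch showing both parts:

python

def factorial(n):
    if n == 0:                   # base case: stops the recursion
        return 1
    return n * factorial(n - 1)  # recursive case: a smaller instance of the same problem

# factorial(5) -> 120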

Types of Recursion:

 Direct Recursion: A function directly calls itself.

 Indirect Recursion: A function calls another function, which in turn calls the first function.

Advantages:

 Simplifies code for problems like tree traversals, factorials, and Fibonacci sequences.

 Reduces the need for external data structures like stacks in certain scenarios.

Disadvantages:

 Higher memory usage due to stack frames for each recursive call.

 Slower execution compared to iterative approaches for deep recursion.


Queue

Definition:
A Queue is a linear data structure following the First In, First Out (FIFO) principle. It is analogous to a
real-world queue, like a line of people where the first person to join is the first to leave.

Key Operations:

1. Enqueue: Adding an element to the rear of the queue.

2. Dequeue: Removing an element from the front of the queue.

3. Peek/Front: Viewing the front element without removing it.

4. isEmpty: Checking whether the queue is empty.

Applications:

 Task scheduling in operating systems.

 Managing requests in servers.

 Breadth-first traversal in graphs.

Types of Queues:

1. Simple Queue: Implements basic FIFO behavior.

2. Circular Queue: Links the rear back to the front to reuse memory efficiently.

3. Priority Queue: Elements are dequeued based on priority instead of order.

4. Double-Ended Queue (Deque): Allows insertion and removal from both ends.
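
A minimal sketch of the basic operations using Python's collections.deque, which offers O(1) insertion and removal at both ends (so the same object also serves as a double-ended queue):

python

from collections import deque

q = deque()
q.append('task1')       # enqueue at the rear
q.append('task2')
q.append('task3')

print(q[0])             # peek/front -> 'task1'
print(q.popleft())      # dequeue from the front -> 'task1'
print(len(q) == 0)      # isEmpty -> False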

Tree and Heap (Detailed Explanation)

Tree

A tree is a hierarchical data structure used to represent relationships in a parent-child manner. Each node contains data and references to its child nodes. Trees are widely used in databases, file systems, and AI algorithms.

Terminologies:

1. Root: The topmost node with no parent.

2. Parent: A node with child nodes.

3. Child: A node derived from a parent node.

4. Leaf: A node with no children.

5. Height: The length of the longest path from a node to a leaf.

6. Depth: The number of edges from the root to a node.

Types of Trees:

1. Binary Tree: Each node has at most two children (left and right).
2. Binary Search Tree (BST): Left subtree nodes < parent; right subtree nodes > parent. Efficient
for searching, insertion, and deletion.

3. AVL Tree: A self-balancing BST where the height difference between left and right subtrees is
at most one.

4. Threaded Binary Tree: Null pointers in nodes are replaced with pointers to in-order
predecessors or successors for faster traversal.
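
A minimal binary search tree sketch following the left < parent < right rule above (insert and search here are illustrative sketches, not a library API):

python

class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # descend left or right according to the BST ordering rule
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # each comparison discards one subtree: O(h), where h is the tree height
    if root is None or root.key == key:
        return root
    return search(root.left if key < root.key else root.right, key)

root = None
for k in (50, 30, 70, 20, 40):
    root = insert(root, k)
print(search(root, 40) is not None)   # True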


Heap

A heap is a specialized tree-based data structure that satisfies the heap property:

 Max-Heap: Every parent node is greater than or equal to its children.

 Min-Heap: Every parent node is smaller than or equal to its children.

Properties:

1. Complete Binary Tree: All levels are fully filled except possibly the last, which is filled from
left to right.

2. Heap Property: Parent nodes dominate their children according to the heap type.

Operations:

1. Heapify: Transform an array into a heap.

2. Insertion: Add an element at the end and restore the heap property by comparing it with its
parent.

3. Deletion: Remove the root, replace it with the last element, and re-heapify.

Applications:

 Priority Queues: Efficient retrieval of the highest or lowest priority element.

 Heap Sort: Sorting elements using the heap structure.
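
Python's standard heapq module implements a binary min-heap on top of a list; a minimal sketch of heapify, insertion, and root deletion:

python

import heapq

data = [9, 4, 7, 1, 3]
heapq.heapify(data)              # transform the list into a min-heap, O(n)

heapq.heappush(data, 2)          # insertion: add at the end, sift up, O(log n)
smallest = heapq.heappop(data)   # deletion: remove root, re-heapify, O(log n)

print(smallest)                  # 1 (the min-heap root)
print(data[0])                   # new root: 2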

Graph (Detailed Explanation)

A graph is a versatile data structure consisting of vertices (nodes) connected by edges (links).

Graph Terminologies

1. Vertices (Nodes): Represent entities.

2. Edges (Links): Represent relationships between vertices.

3. Directed Graph: Edges have a direction.

4. Undirected Graph: Edges do not have a direction.


5. Weighted Graph: Edges have weights representing cost, distance, or capacity.

Graph Representation

1. Adjacency Matrix:

o A 2D array where rows and columns represent vertices.

o Cell values indicate edge existence (1 for an edge, 0 for no edge).

o Example:

0 1 0

1 0 1

0 1 0

2. Adjacency List:

o An array of lists where each list contains neighbors of a vertex.

o Example:

1 → [2]

2 → [1, 3]

3 → [2]

Constructing a Graph from Given Nodes

To create a graph:

1. Define vertices: Identify all entities to be represented as nodes.

2. Establish connections: Determine relationships between nodes and add edges accordingly.

3. Choose a representation: Use adjacency matrix or list based on the problem's requirements.

Graph Traversals

Graph traversal involves visiting all vertices and edges systematically. The two primary
techniques are Breadth-First Search (BFS) and Depth-First Search (DFS).

Breadth-First Search (BFS)

Definition: BFS explores all vertices at the current depth before moving deeper. It uses a
queue to manage the traversal order.

Steps:

1. Start from a given source vertex and mark it as visited.

2. Add it to a queue.

3. Dequeue a vertex, process it, and enqueue all its unvisited neighbors.

4. Repeat until the queue is empty.


Example: Given graph:

1 → 2, 3

2 → 4, 5

3 → 6

BFS Order (starting at 1): 1 → 2 → 3 → 4 → 5 → 6

Depth-First Search (DFS)

Definition: DFS explores as far as possible along a branch before backtracking. It uses a stack
(explicit or via recursion).

Steps:

1. Start from a source vertex, mark it as visited, and process it.

2. Recursively visit all its unvisited neighbors.

3. Backtrack when no unvisited neighbors are left.

Example: Given graph:

1 → 2, 3

2 → 4, 5

3 → 6

DFS Order (starting at 1): 1 → 2 → 4 → 5 → 3 → 6

Practical Example: Building and Traversing a Graph

1. Graph Representation:

python

graph = {
    1: [2, 3],
    2: [4, 5],
    3: [6],
    4: [],
    5: [],
    6: []
}

2. BFS Traversal:
python

bfs(graph, 1) # Output: 1 2 3 4 5 6

3. DFS Traversal:

python

dfs(graph, 1) # Output: 1 2 4 5 3 6
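
The calls above assume bfs and dfs helper functions that the excerpt does not define; a minimal sketch consistent with the outputs shown:

python

from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        print(node, end=' ')
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=' ')
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)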

Algorithms
1. Array Operations

Insert an Element

Insert(arr, size, position, element):

1. if position < 0 or position > size:

Print "Invalid Position"

Return

2. Shift elements from position to the right:

for i from size-1 down to position:

arr[i+1] = arr[i]

3. Insert element at position:

arr[position] = element

4. Increment size

Delete an Element

Delete(arr, size, position):

1. if position < 0 or position >= size:

Print "Invalid Position"


Return

2. Shift elements from position to the left:

for i from position to size-2:

arr[i] = arr[i+1]

3. Decrement size

2. Stack Operations (LIFO)

Push (Insertion)

Push(stack, size, item):

1. if TOP == size - 1:

Print "Stack Overflow"

Return

2. else:

TOP = TOP + 1

stack[TOP] = item

Pop (Deletion)

Pop(stack):

1. if TOP == -1:

Print "Stack Underflow"

Return

2. else:

item = stack[TOP]

TOP = TOP - 1

Return item

Peek (View Top Element)


Peek(stack):

1. if TOP == -1:

Print "Stack is Empty"

Return

2. else:

Return stack[TOP]

3. Queue Operations (FIFO)

Enqueue (Insertion)

Enqueue(queue, size, item):

1. if REAR == size - 1:

Print "Queue Overflow"

Return

2. else:

if FRONT == -1:

FRONT = 0

REAR = REAR + 1

queue[REAR] = item

Dequeue (Deletion)

Dequeue(queue):

1. if FRONT == -1 or FRONT > REAR:

Print "Queue Underflow"

Return

2. else:
item = queue[FRONT]

FRONT = FRONT + 1

Return item

Peek (View Front Element)

Peek(queue):

1. if FRONT == -1 or FRONT > REAR:

Print "Queue is Empty"

Return

2. else:

Return queue[FRONT]

4. Circular Linked List Operations

Insert at Front

InsertFrontCircular(head, item):

1. Create a new node with data = item

2. if head == NULL:

Set new_node.next = new_node

head = new_node

3. else:

Set new_node.next = head.next

head.next = new_node

Swap data of new_node and head

Delete a Specific Item


DeleteCircular(head, target):

1. if head == NULL:

Print "List is Empty"

Return

2. Set current = head and previous = NULL

3. do:

if current.data == target:

if previous == NULL: (the target is the head node)

Traverse to the last node and set last.next = head.next

head = head.next

else:

previous.next = current.next

Delete current

Return

previous = current

current = current.next

while current != head

4. Print "Item Not Found"

5. Binary Tree Operations

Preorder Traversal (Root, Left, Right)

PreOrder(root):

1. if root == NULL:

Return

2. Print root.data

3. PreOrder(root.left)

4. PreOrder(root.right)

Inorder Traversal (Left, Root, Right)


InOrder(root):

1. if root == NULL:

Return

2. InOrder(root.left)

3. Print root.data

4. InOrder(root.right)

Postorder Traversal (Left, Right, Root)

PostOrder(root):

1. if root == NULL:

Return

2. PostOrder(root.left)

3. PostOrder(root.right)

4. Print root.data

Level Order Traversal

LevelOrder(root):

1. if root == NULL:

Return

2. Create an empty queue

3. Enqueue root

4. while queue is not empty:

node = Dequeue(queue)

Print node.data

if node.left != NULL:

Enqueue(node.left)
if node.right != NULL:

Enqueue(node.right)

6. Graph Operations

Depth-First Search (DFS)

DFS(graph, start, visited):

1. Mark start as visited

2. Print start

3. for each neighbor of start:

if neighbor is not visited:

DFS(graph, neighbor, visited)

Breadth-First Search (BFS)

BFS(graph, start):

1. Create an empty queue and enqueue start

2. Mark start as visited

3. while queue is not empty:

node = Dequeue(queue)

Print node

for each neighbor of node:

if neighbor is not visited:

Enqueue(neighbor)

Mark neighbor as visited

Dijkstra’s Algorithm (Shortest Path)


Dijkstra(graph, source):

1. Create distance[] and set all values to infinity

2. Set distance[source] = 0

3. Create a priority queue and enqueue (source, 0)

4. while queue is not empty:

node = Dequeue(queue)

for each neighbor of node:

newDist = distance[node] + weight(node, neighbor)

if newDist < distance[neighbor]:

distance[neighbor] = newDist

Enqueue(neighbor, newDist)

5. Return distance[]
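
A minimal Python sketch of the pseudocode above, using heapq as the priority queue and assuming a weighted adjacency-list representation where graph[u] is a list of (neighbor, weight) pairs:

python

import heapq

def dijkstra(graph, source):
    distance = {node: float('inf') for node in graph}
    distance[source] = 0
    pq = [(0, source)]                    # priority queue of (distance, node)
    while pq:
        dist, node = heapq.heappop(pq)
        if dist > distance[node]:         # stale queue entry, skip it
            continue
        for neighbor, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distance[neighbor]:
                distance[neighbor] = new_dist
                heapq.heappush(pq, (new_dist, neighbor))
    return distance

g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
print(dijkstra(g, 'A'))   # {'A': 0, 'B': 1, 'C': 3}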

Prim's Algorithm (Minimum Spanning Tree)

Prims(graph, start):

1. Create MST[] and set all nodes as unvisited

2. Initialize MST cost = 0

3. Add start node to MST

4. while MST does not include all nodes:

Select the minimum edge connecting MST and a non-MST node

Add the edge to MST

Mark the node as visited

5. Return MST and cost

7. Sorting Algorithms

Quick Sort

QuickSort(arr, low, high):


1. if low < high:

PartitionIndex = Partition(arr, low, high)

QuickSort(arr, low, PartitionIndex - 1)

QuickSort(arr, PartitionIndex + 1, high)

Partition(arr, low, high):

1. Set pivot = arr[high]

2. i = low - 1

3. for j from low to high-1:

if arr[j] <= pivot:

i = i + 1

Swap arr[i] and arr[j]

4. Swap arr[i+1] and arr[high]

5. Return i+1

Merge Sort

MergeSort(arr, left, right):

1. if left < right:

mid = floor((left + right) / 2)

MergeSort(arr, left, mid)

MergeSort(arr, mid+1, right)

Merge(arr, left, mid, right)

Merge(arr, left, mid, right):

1. Create temp arrays L[] and R[]

2. Copy data into L[] and R[]

3. Merge L[] and R[] back into arr[]

4. Copy remaining elements, if any


1. Array Operations

Insertion in an Array

 Insert an element at a specific position.

 Steps:

1. Shift all elements from the specified position one step to the right.

2. Place the new element in the desired position.

3. Update the array size.

 Complexity: O(n) (in the worst case, all elements need to be shifted).

Deletion in an Array

 Remove an element at a specific position.

 Steps:

1. Shift all elements from the specified position one step to the left.

2. Update the array size.

 Complexity: O(n) (in the worst case, all elements after the position are shifted).

2. Stack Operations (LIFO)

Push (Insertion)

 Add an element to the top of the stack.

 Steps:

1. Check if the stack is full (overflow condition).

2. If not, increment the top pointer and place the element there.

 Complexity: O(1).

Pop (Deletion)

 Remove the top element of the stack.

 Steps:

1. Check if the stack is empty (underflow condition).

2. If not, return the top element and decrement the top pointer.

 Complexity: O(1).
Peek

 Retrieve the top element without removing it.

 Steps:

1. Check if the stack is empty.

2. If not, return the element at the top pointer.

 Complexity: O(1).

3. Queue Operations (FIFO)

Enqueue (Insertion)

 Add an element to the rear of the queue.

 Steps:

1. Check if the queue is full (overflow condition).

2. If not, increment the rear pointer and place the element there.

3. If it's the first element, also set the front pointer.

 Complexity: O(1).

Dequeue (Deletion)

 Remove the front element of the queue.

 Steps:

1. Check if the queue is empty (underflow condition).

2. If not, retrieve the element at the front pointer and increment it.

3. Reset pointers if the queue becomes empty.

 Complexity: O(1).

4. Circular Linked List Operations

Insert at Front

 Insert an element at the beginning of a circular linked list.

 Steps:

1. Create a new node.

2. Point the new node to the current head’s next node.

3. Update the head’s next pointer to the new node.


4. Swap the data of the new node and the head to maintain the head position.

 Complexity: O(1).

Delete a Specific Item

 Delete a specific node by its value.

 Steps:

1. Traverse the list until the target is found or the list loops back to the head.

2. Update the pointers of the previous node to skip the target node.

3. Delete the target node.

 Complexity: O(n).

5. Binary Tree Traversals

Preorder (Root → Left → Right)

 Visit the root node, then traverse the left subtree, and finally the right subtree.

 Use Case: Expression evaluation or prefix notation.

Inorder (Left → Root → Right)

 Traverse the left subtree, visit the root, then traverse the right subtree.

 Use Case: Retrieving sorted order of elements in a binary search tree.

Postorder (Left → Right → Root)

 Traverse the left subtree, the right subtree, and finally visit the root node.

 Use Case: Deleting a tree or postfix notation.

Level Order

 Visit nodes level by level from top to bottom.

 Use a queue to traverse the nodes.

 Use Case: Breadth-first processing of binary trees.

6. Graph Traversals

DFS (Depth-First Search)


 Explore as far down a branch as possible before backtracking.

 Use recursion or a stack.

 Use Case: Maze solving, checking connectivity.

 Complexity: O(V + E) (vertices + edges).

BFS (Breadth-First Search)

 Visit all nodes at one level before moving to the next.

 Use a queue.

 Use Case: Finding the shortest path in an unweighted graph.

 Complexity: O(V + E).

7. Sorting Algorithms

Quick Sort

 Divide and Conquer algorithm:

1. Partition the array around a pivot.

2. Recursively sort the left and right partitions.

 Complexity:

o Worst Case: O(n^2) (if the pivot is poorly chosen).

o Average Case: O(n log n).

Merge Sort

 Divide and Conquer algorithm:

1. Divide the array into halves.

2. Recursively sort each half.

3. Merge the sorted halves.

 Complexity: O(n log n).

Bubble Sort

 Repeatedly compare adjacent elements and swap them if needed.

 Optimized Bubble Sort:

o If no swaps occur in a pass, terminate early.


 Complexity:

o Worst Case: O(n^2).

o Best Case: O(n) (if already sorted).
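
A minimal sketch of the optimized version described above; the swapped flag provides the early exit that gives the O(n) best case:

python

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):        # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                   # no swaps in a full pass: already sorted
            break
    return arr

# bubble_sort([5, 1, 4, 2]) -> [1, 2, 4, 5]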

8. Graph Algorithms

Dijkstra’s Algorithm

 Find the shortest path from a source to all other vertices in a weighted graph.

 Steps:

1. Initialize distances as infinity.

2. Use a priority queue to process nodes in order of shortest known distance.

3. Relax edges and update distances.

 Complexity: O((V + E) log V).

Prim’s Algorithm

 Find a Minimum Spanning Tree (MST) of a graph.

 Steps:

1. Start with any vertex.

2. Add the smallest edge that connects a visited vertex to an unvisited vertex.

3. Repeat until all vertices are included.

 Complexity: O((V + E) log V).
