
Data Structures and Algorithms

COURSE INFORMATION

 Course Code: DSA 251 & BIT 6153


 Course Title: Data Structures and Algorithms
 Credit Hours: 3 Units
 Level: Undergraduate 200 Level
 Semester: First Semester
 Prerequisites: Basic knowledge of Programming (PHP)
 Lecture Hours: 3 Hours per Week

Course Description

This course introduces fundamental concepts of data structures and algorithms, which
are essential for problem-solving in computer science and software engineering.
Students will learn how to design, analyze, and implement efficient algorithms and
data structures to solve real-world problems. The course covers topics like arrays,
linked lists, stacks, queues, trees, graphs, sorting, searching, and algorithmic
complexity.

Course Aims

1. To provide students with a strong foundation in data structures and algorithms.


2. To develop problem-solving skills using various data structures.
3. To teach students how to analyze the time and space complexity of algorithms.
4. To enable students to apply data structures and algorithms to real-world software
development.

5. To prepare students for advanced courses in computer science and software engineering.
Course Objectives

By the end of this course, students should be able to:

1. Understand and explain the fundamental concepts of data structures.


2. Implement various data structures like arrays, linked lists, stacks, queues, trees, and graphs.
3. Design efficient algorithms for sorting, searching, and other operations.
4. Analyze the time and space complexity of algorithms using Big O notation.
5. Apply data structures and algorithms to solve complex programming problems.
6. Demonstrate proficiency in implementing data structures and algorithms using a
programming language (PHP).

Learning Outcomes

Upon successful completion of this course, students will be able to:

1. Analyze problems and select appropriate data structures and algorithms for their solution.
2. Implement data structures and algorithms in a programming language.
3. Evaluate the performance of algorithms in terms of time and space complexity.
4. Apply data structures and algorithms in practical applications like databases, operating
systems, and software engineering projects.
5. Demonstrate critical thinking and problem-solving skills in the design and analysis of
algorithms.
Course Outline and Weekly Breakdown

Week | Topics | Details
1 | Introduction to Data Structures & Algorithms | Course overview, importance of data structures, algorithm analysis, Big O notation
2 | Arrays & Linked Lists | Types of arrays, dynamic arrays, singly linked lists, doubly linked lists
3 | Stacks & Queues | Implementation using arrays and linked lists, applications of stacks and queues
4 | Recursion | Concept of recursion, recursive algorithms, analyzing recursive functions
5 | Hash Tables | Hash functions, collision resolution techniques, applications of hash tables
6 | Trees | Binary trees, binary search trees (BST), tree traversals (in-order, pre-order, post-order)
7 | Advanced Trees | AVL trees, Red-Black trees, B-trees, applications of trees
8 | Graphs | Graph representations (adjacency matrix/list), graph traversal (BFS & DFS), shortest path algorithms
9 | Sorting Algorithms | Bubble sort, insertion sort, selection sort, merge sort, quick sort, heap sort
10 | Searching Algorithms | Linear search, binary search, search algorithms on trees and graphs
11 | Dynamic Programming | Concept of dynamic programming, memoization, solving problems like Fibonacci, knapsack problem
12 | Greedy Algorithms | Concept of greedy algorithms, examples like coin change problem, job scheduling
13 | Algorithm Complexity Analysis | Time complexity, space complexity, analyzing best, worst, and average cases
14 | Review and Revision | Review of all topics, Q&A, preparation for final exams
15 | Final Project & Presentation | Implementation of a data structure/algorithm-based project, project presentation
16 | Final Exam | Comprehensive assessment covering all course topics
Teaching Methodology

 Lectures: Explanation of concepts using slides, whiteboard, and live coding.


 Lab Sessions: Hands-on coding exercises and problem-solving using programming languages
(PHP).
 Group Projects: Collaborative projects to design and implement data structures and
algorithms.
 Assignments: Weekly or bi-weekly assignments to reinforce learning.
 Class Discussions: Encouraging student participation and problem-solving during lectures.

Recommended Textbooks

1. "Data Structures and Algorithms Made Easy" by Narasimha Karumanchi


2. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein

3. "Data Structures and Algorithms in Python" by Michael T. Goodrich, Roberto Tamassia, and
Michael H. Goldwasser

4. "Algorithms" by Robert Sedgewick and Kevin Wayne


5. "Cracking the Coding Interview" by Gayle Laakmann McDowell (for problem-solving practice)

Course Policies

 Attendance: Students must attend at least 75% of classes to be eligible for the final exam.

 Late Submission: Assignments submitted late will incur a penalty of 10% per day.

 Academic Integrity: Plagiarism and cheating will result in disciplinary action, including

possible failure of the course.

 Office Hours: Available for consultation by appointment or during scheduled office hours.
Data Structures and Algorithms Explained

DATA STRUCTURES

A data structure is a way of organizing and storing data so that it can be accessed
and modified efficiently. It defines the relationship between the data elements and
provides a means of storing and retrieving them.

Common types of data structures include:

 Arrays: A collection of elements identified by index or key. They store elements of the same
data type in a contiguous block of memory.
 Linked Lists: A linear data structure where elements (nodes) are linked using pointers. Each
node contains data and a reference to the next node.
 Stacks: A collection of elements that follow the Last In, First Out (LIFO) principle. Operations
are done at one end, called the "top."
 Queues: A collection of elements that follow the First In, First Out (FIFO) principle. Elements
are inserted at the rear and removed from the front.
 Trees: A hierarchical data structure that consists of nodes, with each node having a value and
references to its child nodes.
 Graphs: A collection of nodes (vertices) and edges connecting them. Graphs can represent
complex relationships like social networks or transportation systems.

Algorithms

An algorithm is a step-by-step procedure or formula for solving a problem.


Algorithms take input, process it, and provide output. They are often evaluated based
on their efficiency in terms of time complexity (how long the algorithm takes to
complete) and space complexity (how much memory it requires).

Common algorithms include:

 Sorting Algorithms: Used to reorder a collection of elements in a particular order, such as


Quick-Sort, Merge-Sort, and Bubble-Sort.
 Searching Algorithms: Used to find an element in a data structure, such as Binary Search and
Linear Search.
 Graph Algorithms: Used to solve problems like finding the shortest path between nodes (e.g.,
Dijkstra's algorithm).
 Dynamic Programming: A method for solving problems by breaking them down into simpler
subproblems and storing their solutions (e.g., Fibonacci sequence).

10 Real-Life Scenarios for Data Structures and Algorithms

1. Navigation Systems (Shortest Path Search)

Data Structure: Graphs

Algorithm: Dijkstra's Algorithm or A* Algorithm

Scenario: When using a GPS for driving or walking directions, the system treats the map as a graph,
with roads as edges and intersections as nodes. Algorithms like Dijkstra's are used to find the shortest
route from one location to another, minimizing time or distance.

2. Social Media Feed (Queue)

Data Structure: Queue

Algorithm: FIFO (First In, First Out)

Scenario: Social media platforms like Facebook or Twitter manage your newsfeed using a queue.
Posts, comments, and notifications are processed in the order they arrive. The oldest posts (first in)
are shown first, and as new content is posted (new in), it replaces older content in the feed.

3. Web Page Caching (Hashing)

Data Structure: Hash Table

Algorithm: Hashing

Scenario: When you visit a website, images, text, and other resources may be stored temporarily in
your browser cache. A hash table is used to store these resources by their URLs (keys) and the
corresponding data (values). This allows for fast retrieval of resources on subsequent visits, reducing
load times.

4. Undo Function in Software (Stack)

Data Structure: Stack

Algorithm: LIFO (Last In, First Out)

Scenario: When using word processors, design software, or even online platforms, the "undo"
function relies on a stack data structure. Each action you take (e.g., typing, deleting) is pushed onto
the stack. When you press "Undo," the most recent action (last in) is popped off and reverted.
5. Product Recommendation Systems (Sorting and Searching)

Data Structure: Array, Tree, Hash Table

Algorithm: Sorting, Searching, Collaborative Filtering

Scenario: E-commerce websites like Amazon use sorting algorithms to rank products based on
popularity, reviews, or other criteria. They also use collaborative filtering algorithms to recommend
products to users by analyzing past purchasing patterns (finding similar users or products).

6. Autocomplete Feature (Trie Data Structure)

Data Structure: Trie (Prefix Tree)

Algorithm: Prefix Matching

Scenario: When you start typing in a search bar or text input field (e.g., Google Search, email client, or
code editor), the autocomplete feature suggests words or phrases based on the characters you've
typed. A trie data structure is commonly used to efficiently store and search for prefix-based matches,
making autocomplete fast even with large datasets of words.

7. Bank Account Transactions (Linked List)

Data Structure: Linked List

Algorithm: Traversing and Insertion/Deletion

Scenario: A bank might use a linked list to track the sequence of transactions on an account. Each
node in the linked list represents a transaction (such as deposit or withdrawal) and contains details
like transaction amount, time, and type. Linked lists allow for efficient insertion and deletion,
especially when managing dynamic transaction histories that may vary in length.

8. Real-Time Online Multiplayer Games (Graph)

Data Structure: Graph

Algorithm: Depth-First Search (DFS) or Breadth-First Search (BFS)

Scenario: In real-time online multiplayer games, like those involving team-based interactions, a graph
can represent player connections (nodes) and their relationships (edges). Algorithms like DFS or BFS
can be used to identify groups of connected players (e.g., enemies, allies), analyze distances between
players, or even find the optimal path for players' movements.
9. Flight Reservation System (Heap/Priority Queue)

Data Structure: Heap (Priority Queue)

Algorithm: Sorting or Scheduling Algorithm

Scenario: In flight reservation systems, the priority queue (often implemented with a heap) is used to
manage flight tickets based on their urgency or availability. For example, flights may be prioritized
based on factors such as seat availability, flight duration, and pricing, ensuring that customers get the
most relevant options first when booking.

10. Data Compression (Huffman Coding)

Data Structure: Tree (Huffman Tree)

Algorithm: Huffman Coding

Scenario: In data compression tools like ZIP files or MP3 audio compression, Huffman coding is used
to reduce file size. The algorithm builds a tree where frequently occurring characters (like letters or
data symbols) are assigned shorter codes, and less frequent characters are assigned longer codes.
This minimizes the overall size of the data, making it more efficient for storage or transmission.
Week 1: Introduction to Data Structures &
Algorithms
1. Course Overview

1.1 Course Objectives

 Understand the fundamental concepts of data structures and algorithms.


 Learn how to implement various data structures and design efficient algorithms.
 Analyze the performance of algorithms using Big O notation.
 Develop problem-solving skills applicable to real-world software engineering.

1.2 Course Structure

 Lectures: Theoretical explanations and examples.


 Lab Sessions: Practical coding exercises.
 Assignments & Quizzes: Regular assessments to reinforce learning.
 Projects: Real-world application of data structures and algorithms.
 Exams: Midterm and final exams for comprehensive evaluation.

2. Importance of Data Structures and Algorithms

Data structures and algorithms are fundamental to computer science and software
engineering. They enable us to:

 Efficiently manage data in software applications (e.g., databases, file systems, and search
engines).
 Optimize performance by using the right data structures and algorithms, which can
significantly affect the speed and efficiency of programs.
 Solve complex problems in various fields like artificial intelligence, machine learning, web
development, and data analytics.
 Improve coding skills for technical interviews and competitive programming.

3. What are Data Structures?

3.1 Definition

A data structure is a way of organizing and storing data in a computer so that it can be
accessed and modified efficiently. Data structures define the layout of data in memory,
enabling efficient data retrieval, manipulation, and storage.
3.2 Types of Data Structures

Data structures can be broadly classified into two categories:

 Primitive Data Structures: Basic structures like integers, floats, characters, and pointers.
 Non-Primitive Data Structures: More complex structures that can hold multiple values.

o Linear Data Structures: Elements are arranged in a sequential manner (e.g., arrays,
linked lists, stacks, queues).
o Non-Linear Data Structures: Elements are arranged in a hierarchical or
interconnected manner (e.g., trees, graphs, heaps).

3.3 Common Data Structures

 Arrays: Collection of elements stored in contiguous memory locations.


 Linked Lists: Collection of nodes where each node contains data and a reference to the next
node.
 Stacks: Last-In, First-Out (LIFO) data structure used in function call management, undo
mechanisms, etc.
 Queues: First-In, First-Out (FIFO) data structure used in scheduling tasks, buffering, etc.
 Trees: Hierarchical data structure with nodes connected by edges (e.g., binary trees, binary
search trees).
 Graphs: Collection of nodes (vertices) connected by edges, used to represent networks,
paths, and relationships.
 Hash Tables: Data structure that stores data in key-value pairs for fast access.

4. What are Algorithms?

4.1 Definition

An algorithm is a step-by-step procedure or a set of rules for solving a specific


problem or performing a computation. It is a sequence of instructions that the
computer follows to achieve a particular goal.

4.2 Characteristics of a Good Algorithm

 Correctness: The algorithm should produce the correct output for all valid inputs.
 Efficiency: The algorithm should be optimized for time and space, minimizing computational
resources.
 Clarity: The algorithm should be easy to understand, write, and debug.
 Scalability: The algorithm should perform well as the input size grows.

4.3 Examples of Algorithms

 Sorting Algorithms: Arrange data in a specific order (e.g., Bubble Sort, Quick Sort, Merge
Sort).
 Searching Algorithms: Find specific elements within data (e.g., Linear Search, Binary Search).
 Graph Algorithms: Traverse and analyze graphs (e.g., Depth-First Search (DFS), Breadth-First
Search (BFS)).
 Dynamic Programming: Solve complex problems by breaking them into simpler subproblems
(e.g., Fibonacci sequence, Knapsack problem).

5. Algorithm Analysis

5.1 Why Analyze Algorithms?

Analyzing algorithms helps us understand their efficiency in terms of time and space.
It answers questions like:

 How fast is the algorithm?


 How much memory does the algorithm use?
 How does the algorithm scale with larger input sizes?

5.2 Metrics for Analysis

 Time Complexity: Measures the amount of time an algorithm takes to complete as a


function of the input size (n).
 Space Complexity: Measures the amount of memory an algorithm uses as a function of the
input size (n).

6. Introduction to Big O Notation

6.1 What is Big O Notation?

Big O notation is a mathematical notation used to describe the upper bound of an


algorithm's time or space complexity. It provides a way to analyze the worst-case
scenario of how an algorithm's performance will grow as the size of the input
increases.
6.2 Big O Complexity Classes

Notation | Name | Example | Description
O(1) | Constant Time | Accessing an element in an array | Executes in the same amount of time, regardless of input size
O(log n) | Logarithmic Time | Binary Search | Reduces problem size by half each time
O(n) | Linear Time | Linear Search | Time increases proportionally with input size
O(n log n) | Log-Linear Time | Merge Sort, Quick Sort | Combination of linear and logarithmic complexity
O(n²) | Quadratic Time | Bubble Sort, Insertion Sort (worst case) | Time increases quadratically with input size
O(2ⁿ) | Exponential Time | Recursive algorithms like Tower of Hanoi | Time doubles with each additional input element
O(n!) | Factorial Time | Permutations, solving TSP | Extremely slow, used in brute force solutions

6.3 Examples of Big O Analysis

1. Accessing an element in an array: O(1)


2. Linear Search: O(n)
3. Binary Search: O(log n)
4. Bubble Sort (worst case): O(n²)

7. Practical Examples

Example 1: Constant Time Complexity (O(1))

<?php
function getFirstElement($arr) {
    return $arr[0];
}

$array = [10, 20, 30, 40];
echo "First element: " . getFirstElement($array); // Output: First element: 10
?>
Example 2: Linear Time Complexity (O(n))

<?php
function printAllElements($arr) {
    foreach ($arr as $element) {
        echo $element . "\n";
    }
}

$array = [10, 20, 30, 40];
printAllElements($array); // Output: 10, 20, 30, 40 (each on a new line)
?>
Example 3: Logarithmic Time Complexity (O(log n))

<?php
function binarySearch($arr, $target) {
    $left = 0;
    $right = count($arr) - 1;

    while ($left <= $right) {
        $mid = floor(($left + $right) / 2);

        if ($arr[$mid] == $target) {
            return $mid; // Target found at index $mid
        } elseif ($arr[$mid] < $target) {
            $left = $mid + 1; // Search in the right half
        } else {
            $right = $mid - 1; // Search in the left half
        }
    }

    return -1; // Target not found
}

// Example usage
$sortedArray = [1, 3, 5, 7, 9, 11, 13, 15];
$target = 7;

$result = binarySearch($sortedArray, $target);
if ($result !== -1) {
    echo "Element found at index " . $result; // Output: Element found at index 3
} else {
    echo "Element not found";
}
?>

Example 4: Quadratic Time Complexity (O(n²))

<?php
function printPairs($arr) {
    $length = count($arr);
    for ($i = 0; $i < $length; $i++) {
        for ($j = 0; $j < $length; $j++) {
            echo "(" . $arr[$i] . ", " . $arr[$j] . ")\n";
        }
    }
}

$array = [1, 2, 3];
printPairs($array);
/* Output:
(1, 1)
(1, 2)
(1, 3)
(2, 1)
(2, 2)
(2, 3)
(3, 1)
(3, 2)
(3, 3)
*/
?>

8. Summary

 Data structures and algorithms are essential for optimizing software performance.
 Understanding time and space complexity helps in choosing the right approach for solving
problems.
 Big O notation is a powerful tool for analyzing the efficiency of algorithms.

9. Practice Questions

1. What is the time complexity of accessing an element in a hash table?


2. Implement a function to find the maximum element in an array and analyze its time
complexity.
3. Write a PHP function to reverse a linked list and discuss its time complexity.
4. Explain the difference between O(n log n) and O(n²) complexities with examples.

10. Assignment

 Task: Write a program that takes an array of integers and returns the sorted array using
Bubble Sort. Analyze the time complexity of your solution.
Lecture Note: Arrays & Linked Lists

1. Course Overview
 Topic: Arrays & Linked Lists
 Duration: 3 hours
 Objective: Understand the various types of arrays and linked lists, their operations,
advantages, disadvantages, and practical use cases.

2. Learning Objectives
By the end of this lecture, students should be able to:

1. Understand the concept of arrays and linked lists.


2. Differentiate between static arrays, dynamic arrays, singly linked lists, and doubly linked lists.
3. Implement arrays and linked lists using PHP.
4. Analyze the time and space complexity of basic operations (insertion, deletion, traversal, and
searching).

3. Introduction
What are Arrays?

 An array is a data structure that stores a fixed-size sequence of elements of the same data
type.
 Arrays provide random access to elements using an index, which makes retrieval very
efficient.

What are Linked Lists?

 A linked list is a linear data structure where each element (called a node) contains a value
and a reference (or link) to the next element in the sequence.
 Linked lists do not have a fixed size, which allows them to grow or shrink dynamically.

4. Types of Arrays
4.1. Static Arrays

 Definition: Arrays with a fixed size determined at the time of declaration.


 Advantages: Simple to use, fast access via index.
 Disadvantages: Fixed size, which can lead to wasted memory or overflow issues.

Example in PHP:
<?php

// Static Array

$numbers = [10, 20, 30, 40, 50];

echo "Element at index 2: " . $numbers[2]; // Output: 30

?>

4.2. Dynamic Arrays

 Definition: Arrays that can grow or shrink in size during runtime.


 Advantages: Flexible size, better memory management.
 Disadvantages: May involve overhead due to resizing.

Example in PHP:

<?php

// Dynamic Array

$colors = [];

array_push($colors, "Red", "Green", "Blue");

print_r($colors);

// Output: Array ( [0] => Red [1] => Green [2] => Blue )

?>

Common Operations on Arrays

Operation | Time Complexity
Access | O(1)
Insertion | O(n) (worst case if resizing)
Deletion | O(n)
Searching | O(n) (Linear Search) or O(log n) (Binary Search if sorted)
5. Linked Lists
5.1. Singly Linked Lists

 Definition: A linked list where each node contains data and a reference to the next node.
 Structure:

o Node: Contains data and a pointer to the next node.


o Head: The first node in the list.
o Tail: The last node, which points to NULL.

Example in PHP:

<?php

// Node class for Singly Linked List
class Node {
    public $data;
    public $next;

    public function __construct($data) {
        $this->data = $data;
        $this->next = null;
    }
}

class SinglyLinkedList {
    private $head;

    public function __construct() {
        $this->head = null;
    }

    // Add node at the end
    public function append($data) {
        $newNode = new Node($data);
        if ($this->head === null) {
            $this->head = $newNode;
        } else {
            $current = $this->head;
            while ($current->next !== null) {
                $current = $current->next;
            }
            $current->next = $newNode;
        }
    }

    // Display all nodes
    public function display() {
        $current = $this->head;
        while ($current !== null) {
            echo $current->data . " -> ";
            $current = $current->next;
        }
        echo "NULL\n";
    }
}

$list = new SinglyLinkedList();
$list->append(10);
$list->append(20);
$list->append(30);
$list->display();
// Output: 10 -> 20 -> 30 -> NULL

?>

Advantages of Singly Linked Lists

 Dynamic size.
 Easy insertion and deletion at the beginning.

Disadvantages of Singly Linked Lists

 No direct access to elements (must traverse from the head).


 Extra memory is required for storing pointers.

5.2. Doubly Linked Lists

 Definition: A linked list where each node contains data, a reference to the next node, and a
reference to the previous node.
 Structure:

o Node: Contains data, a pointer to the next node, and a pointer to the previous node.
o Head: The first node.
o Tail: The last node.

Example in PHP:

<?php

// Node class for Doubly Linked List
class DoublyNode {
    public $data;
    public $prev;
    public $next;

    public function __construct($data) {
        $this->data = $data;
        $this->prev = null;
        $this->next = null;
    }
}

class DoublyLinkedList {
    private $head;

    public function __construct() {
        $this->head = null;
    }

    // Add node at the end
    public function append($data) {
        $newNode = new DoublyNode($data);
        if ($this->head === null) {
            $this->head = $newNode;
        } else {
            $current = $this->head;
            while ($current->next !== null) {
                $current = $current->next;
            }
            $current->next = $newNode;
            $newNode->prev = $current;
        }
    }

    // Display all nodes
    public function display() {
        $current = $this->head;
        while ($current !== null) {
            echo $current->data . " <-> ";
            $current = $current->next;
        }
        echo "NULL\n";
    }
}

$doublyList = new DoublyLinkedList();
$doublyList->append(5);
$doublyList->append(15);
$doublyList->append(25);
$doublyList->display();
// Output: 5 <-> 15 <-> 25 <-> NULL

?>
Advantages of Doubly Linked Lists

 Can be traversed in both directions.


 Easier deletion of a given node (compared to singly linked lists).

Disadvantages of Doubly Linked Lists

 Requires more memory for storing an extra pointer.


 Insertion and deletion involve updating two pointers.

6. Comparison of Arrays and Linked Lists


Feature | Arrays | Singly Linked Lists | Doubly Linked Lists
Size | Fixed (Static) or Dynamic | Dynamic | Dynamic
Access Time (Index) | O(1) | O(n) | O(n)
Insertion (Beginning) | O(n) | O(1) | O(1)
Deletion (Given Node) | O(n) | O(n) | O(1)
Memory Usage | Less | More (due to pointers) | Even more (due to 2 pointers)

7. Summary
 Arrays and linked lists are fundamental data structures with distinct advantages and trade-
offs.
 Arrays are suitable for scenarios requiring fast access to elements.
 Linked lists are ideal for applications requiring frequent insertions and deletions.

8. Practice Questions
1. Implement a function in PHP to reverse a singly linked list.
2. Write a PHP function to search for a specific element in a doubly linked list.
3. Compare the performance of array insertion vs. linked list insertion for 10,000 elements.
Lecture Note: Stacks & Queues

1. Introduction
Stacks

 A stack is a linear data structure that follows the Last In, First Out (LIFO) principle.
 Think of it like a stack of books: the last book added is the first one to be removed.

Queues

 A queue is a linear data structure that follows the First In, First Out (FIFO) principle.
 Imagine a line of people waiting for a bus: the first person in line is the first to board the bus.

2. Learning Objectives
By the end of this lecture, you will be able to:

1. Understand the concepts of stacks and queues.


2. Implement stacks and queues using arrays in PHP.
3. Identify real-world applications of stacks and queues.
4. Analyze the time complexity of basic operations.

3. Stack Operations
Operation | Description | Time Complexity
push | Adds an element to the top of the stack | O(1)
pop | Removes the top element | O(1)
peek | Returns the top element without removing it | O(1)
isEmpty | Checks if the stack is empty | O(1)
Stack Implementation using Arrays in PHP
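
A minimal sketch of an array-based stack (the class name and the use of PHP's built-in array_push() and array_pop() are illustrative choices):

<?php
// A simple array-based stack (illustrative sketch)
class ArrayStack {
    private $items = [];

    // push: add an element to the top of the stack
    public function push($item) {
        array_push($this->items, $item);
    }

    // pop: remove and return the top element (or null if empty)
    public function pop() {
        return $this->isEmpty() ? null : array_pop($this->items);
    }

    // peek: return the top element without removing it
    public function peek() {
        return $this->isEmpty() ? null : end($this->items);
    }

    // isEmpty: check whether the stack has no elements
    public function isEmpty() {
        return empty($this->items);
    }
}

// Example usage
$stack = new ArrayStack();
$stack->push(10);
$stack->push(20);
$stack->push(30);
echo $stack->pop();  // Output: 30
echo $stack->peek(); // Output: 20
?>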

Stack Implementation using Linked Lists in PHP
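
A companion sketch of a stack built on a linked list; pushing and popping at the head keeps both operations O(1) (again, the class names are illustrative):

<?php
// Linked-list-based stack: push/pop happen at the head (illustrative sketch)
class StackNode {
    public $data;
    public $next = null;
    public function __construct($data) {
        $this->data = $data;
    }
}

class LinkedListStack {
    private $top = null;

    public function push($data) {
        $node = new StackNode($data);
        $node->next = $this->top; // new node points to the old top
        $this->top = $node;
    }

    public function pop() {
        if ($this->top === null) {
            return null; // stack is empty
        }
        $data = $this->top->data;
        $this->top = $this->top->next;
        return $data;
    }

    public function peek() {
        return $this->top === null ? null : $this->top->data;
    }

    public function isEmpty() {
        return $this->top === null;
    }
}

// Example usage
$stack = new LinkedListStack();
$stack->push("a");
$stack->push("b");
echo $stack->pop(); // Output: b
?>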


Applications of Stacks

 Expression Evaluation: Used in evaluating postfix, prefix, and infix expressions.


 Backtracking Algorithms: Examples include solving mazes and the Tower of Hanoi.
 Undo Mechanism in Text Editors: Used to track the history of actions.
 Browser History: Navigating back and forth between web pages.

4. Queues

Queue Operations
Operation | Description | Time Complexity
enqueue | Adds an element to the rear of the queue | O(1)
dequeue | Removes the front element | O(1)
peek | Returns the front element without removing it | O(1)
isEmpty | Checks if the queue is empty | O(1)

Queue Implementation using Arrays in PHP
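
A minimal sketch of an array-based queue (class name illustrative). Note that array_shift() re-indexes the array on every call, so this simple version's dequeue is not strictly O(1); the linked-list version below avoids that cost:

<?php
// A simple array-based queue (illustrative sketch)
class ArrayQueue {
    private $items = [];

    // enqueue: add an element to the rear of the queue
    public function enqueue($item) {
        array_push($this->items, $item);
    }

    // dequeue: remove and return the front element (or null if empty)
    public function dequeue() {
        return $this->isEmpty() ? null : array_shift($this->items);
    }

    // peek: return the front element without removing it
    public function peek() {
        return $this->isEmpty() ? null : $this->items[0];
    }

    // isEmpty: check whether the queue has no elements
    public function isEmpty() {
        return empty($this->items);
    }
}

// Example usage
$queue = new ArrayQueue();
$queue->enqueue("first");
$queue->enqueue("second");
echo $queue->dequeue(); // Output: first
?>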


Queue Implementation using Linked Lists in PHP
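
A sketch of a queue built on a linked list, keeping references to both the front and the rear so that enqueue and dequeue are each O(1):

<?php
// Linked-list-based queue with front and rear pointers (illustrative sketch)
class QueueNode {
    public $data;
    public $next = null;
    public function __construct($data) {
        $this->data = $data;
    }
}

class LinkedListQueue {
    private $front = null;
    private $rear = null;

    public function enqueue($data) {
        $node = new QueueNode($data);
        if ($this->rear === null) {
            $this->front = $this->rear = $node; // first element
        } else {
            $this->rear->next = $node;
            $this->rear = $node;
        }
    }

    public function dequeue() {
        if ($this->front === null) {
            return null; // queue is empty
        }
        $data = $this->front->data;
        $this->front = $this->front->next;
        if ($this->front === null) {
            $this->rear = null; // queue became empty
        }
        return $data;
    }

    public function isEmpty() {
        return $this->front === null;
    }
}

// Example usage
$queue = new LinkedListQueue();
$queue->enqueue(1);
$queue->enqueue(2);
echo $queue->dequeue(); // Output: 1
?>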

Applications of Queues
 Scheduling Algorithms: Used in operating systems for task scheduling (e.g., CPU scheduling).
 Breadth-First Search (BFS): Used in graph traversal.
 Print Queue: Managing print jobs in printers.
 Call Center Systems: Managing customer calls in the order they arrive.

5. Key Differences Between Stacks & Queues


Feature | Stack (LIFO) | Queue (FIFO)
Insertion | At the top | At the rear
Removal | From the top | From the front
Use Cases | Backtracking, Expression Evaluation | Scheduling, BFS
6. Summary
 Stacks follow the LIFO principle, while queues follow the FIFO principle.
 Stacks and queues can be implemented using both arrays and linked lists.
 Stacks are useful in scenarios like backtracking and expression evaluation, while queues are
ideal for task scheduling and managing processes in sequential order.

7. Practice Questions
1. Implement a stack that supports the min() function, which returns the minimum element
in the stack in O(1) time.
2. Write a function to reverse a queue using a stack.
3. Implement a circular queue using an array in PHP.
4. Explain the differences between stacks and queues with real-world examples.
Lecture Note: Recursion

1. Introduction to Recursion

What is Recursion?

 Recursion is a technique in programming where a function calls itself in order to solve a


problem.
 A recursive function solves a problem by breaking it down into smaller sub-problems of the
same type.
 The function continues to call itself with modified parameters until it reaches a base case,
which stops the recursion.

Key Concepts

 Base Case: The condition under which the recursion stops. Without a base case, the function
would call itself indefinitely, leading to a stack overflow.
 Recursive Case: The part of the function where it calls itself with new parameters, aiming to
reach the base case.

2. Learning Objectives
By the end of this lecture, you will be able to:

1. Understand the concept of recursion and how it works.


2. Write simple recursive functions in PHP.
3. Analyze the time and space complexity of recursive algorithms.
4. Identify problems that can be efficiently solved using recursion.

3. Recursive Algorithms
Structure of a Recursive Function

A recursive function typically follows this structure:
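
A minimal skeleton in PHP (the function name and the trivial base case are placeholders):

<?php
// Generic shape of a recursive function (illustrative skeleton)
function solve($n) {
    if ($n <= 0) {        // 1. Base case: stop the recursion
        return 0;
    }
    return solve($n - 1); // 2. Recursive case: work on a smaller sub-problem
}
?>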


Example 1: Factorial Calculation

 The factorial of a number n (denoted n!) is the product of all positive integers less than or equal to n.
 Mathematically: n! = n × (n − 1)!, with the base case being 0! = 1.

Recursive PHP Implementation:
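
A sketch of the recursive factorial function (assuming a non-negative integer argument):

<?php
// Recursive factorial: n! = n * (n - 1)!, with 0! = 1
function factorial($n) {
    if ($n === 0) {
        return 1; // Base case
    }
    return $n * factorial($n - 1); // Recursive case
}

echo factorial(5); // Output: 120
?>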

Explanation

 If n = 5
o factorial(5) = 5 × factorial(4)
o factorial(4)= 4 × factorial(3)
o factorial(3)= 3 × factorial(2)
o factorial(2)= 2 × factorial(1)
o factorial(1)= 1 × factorial(0)
o factorial(0) = 1 (Base Case)

4. Analyzing Recursive Functions


Time Complexity

 The time complexity of a recursive function depends on the number of times the function
calls itself.
 For the factorial function, the time complexity is O(n) since it makes n calls.

Space Complexity

 The space complexity includes the memory used by the function call stack.
 Each recursive call adds a new frame to the stack, so the space complexity is also O(n) for the
factorial function.

Example 2: Fibonacci Sequence


 The Fibonacci Sequence is a series of numbers where each number is the sum of the two
preceding ones, starting with 0 and 1.

 Mathematically: F(n) = F(n−1) + F(n−2), with base cases F(0) = 0 and F(1) = 1.

Recursive PHP Implementation:
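
A direct translation of the definition into PHP (a sketch; this naive version recomputes the same values many times, which is why its time complexity is exponential):

<?php
// Naive recursive Fibonacci: F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1
function fibonacci($n) {
    if ($n <= 1) {
        return $n; // Base cases: F(0) = 0, F(1) = 1
    }
    return fibonacci($n - 1) + fibonacci($n - 2); // Recursive case
}

echo fibonacci(6); // Output: 8
?>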

Explanation

 If n=6:

o fibonacci(6)= fibonacci(5) + fibonacci(4)


o fibonacci(5)= fibonacci(4) + fibonacci(3), and so on.
 Time Complexity: O(2^n) (Exponential), because it recalculates values multiple times.
 Space Complexity: O(n), due to the depth of the recursion stack.

6. When to Use Recursion


Best Suited For:

 Problems that can be divided into smaller, similar sub-problems.


 Examples: Tree traversal, Graph traversal (DFS), and problems like Tower of Hanoi.

Avoid Recursion If:

 The problem can be solved iteratively with better performance.


 Recursion might lead to stack overflow due to deep recursion (e.g., very large input sizes).

7. Practice Questions
1. Write a recursive function in PHP to reverse a string.
2. Implement a recursive function to find the sum of digits of a given number.
3. Write a PHP function using recursion to determine if a number is a palindrome.

8. Summary
 Recursion is a powerful concept where a function calls itself to solve a problem.
 Always define a base case to avoid infinite recursion.
 Analyze time and space complexity to determine if recursion is efficient for your problem.
 Practice with simple examples to build a strong foundation in recursion.

Lecture Note: Hash Tables


1. Introduction to Hash Tables
What is a Hash Table?

 A hash table is a data structure that stores key-value pairs.


 It uses a hash function to compute an index (also called a hash code) into an array of buckets
or slots, where the desired value can be found.
 The goal is to achieve fast data retrieval in O(1) time for search, insertion, and deletion
operations in the average case.

Real-World Analogy

 Think of a hash table like an index in a book:

o You have a keyword (key).


o The index tells you exactly where to find the information (value) in the book.

2. Learning Objectives
By the end of this lecture, you will be able to:

1. Understand the concept of hash functions and their role in hash tables.
2. Implement a simple hash table in PHP.
3. Understand collision resolution techniques.
4. Explore real-world applications of hash tables.

3. Hash Functions
What is a Hash Function?

 A hash function is a function that takes an input (a key) and returns a fixed-size string of
bytes. The output is typically an index that corresponds to a position in the hash table.
 Example: hash(key) → index.

Properties of a Good Hash Function:

1. Deterministic: The same input always produces the same output.


2. Efficient: Computes quickly.
3. Uniform Distribution: Distributes keys evenly across the hash table to minimize collisions.
4. Minimize Collisions: Two different keys should rarely produce the same index.
4. Collisions in Hash Tables
What is a Collision?

 A collision occurs when two different keys produce the same hash index.
 Collisions are unavoidable in most hash table implementations, so it's important to have
strategies to handle them.

Collision Resolution Techniques

1. Separate Chaining

 Each bucket in the hash table stores a linked list of entries that hash to the same index.
 If a collision occurs, the new key-value pair is added to the end of the linked list at that index.

Pros:

 Simple to implement.
 Dynamically handles an unlimited number of collisions.

Cons:

 Requires additional memory for pointers.


 Can degrade performance to O(n) if many collisions occur.

2. Open Addressing

Instead of storing multiple elements in the same bucket, open addressing looks
for the next available slot using a probing sequence.

Types of Probing:

o Linear Probing: Check the next slot in sequence (index + 1).


o Quadratic Probing: Check slots using a quadratic function (index + 1^2, index + 2^2,
etc.).
o Double Hashing: Use a second hash function to determine the probing sequence.

Pros:

 More memory efficient than separate chaining.


 Can achieve better cache performance.

Cons:

 Can suffer from clustering (multiple keys clumping together).


 Degrades to O(n) in the worst case.
5. Simple PHP Implementation of a Hash Table
Using Separate Chaining
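
A minimal sketch of a hash table that resolves collisions with separate chaining; the bucket count, the use of PHP's crc32() as the hash function, and the class and method names are illustrative choices:

<?php
// Hash table with separate chaining (illustrative sketch)
class HashTable {
    private $buckets;
    private $size;

    public function __construct($size = 16) {
        $this->size = $size;
        $this->buckets = array_fill(0, $size, []); // each bucket holds a list of [key, value] pairs
    }

    // Map a key to a bucket index
    private function hash($key) {
        return abs(crc32((string)$key)) % $this->size;
    }

    // Insert or update a key-value pair
    public function put($key, $value) {
        $index = $this->hash($key);
        foreach ($this->buckets[$index] as $i => $pair) {
            if ($pair[0] === $key) {
                $this->buckets[$index][$i][1] = $value; // update existing key
                return;
            }
        }
        $this->buckets[$index][] = [$key, $value]; // chain a new pair onto the bucket
    }

    // Retrieve a value by key (null if absent)
    public function get($key) {
        $index = $this->hash($key);
        foreach ($this->buckets[$index] as $pair) {
            if ($pair[0] === $key) {
                return $pair[1];
            }
        }
        return null;
    }
}

// Example usage
$table = new HashTable();
$table->put("apple", 3);
$table->put("banana", 7);
echo $table->get("apple"); // Output: 3
?>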

6. Applications of Hash Tables


1. Caching and Lookup Tables

 Hash tables are commonly used in caches to store frequently accessed data for quick
retrieval.

2. Database Indexing

 Hash tables are used in databases to speed up the search for data records using keys.

3. Symbol Tables in Compilers

 Hash tables are used to store variables and their values for quick access during program
compilation.
4. Counting Frequencies

 Useful for counting the occurrence of elements in a collection (e.g., word frequency in a
document).

7. Analyzing Hash Table Performance


Operation | Average Time Complexity | Worst-Case Time Complexity
Search | O(1) | O(n)
Insert | O(1) | O(n)
Delete | O(1) | O(n)

 Average Case: Most operations in hash tables are O(1) because a good hash function
distributes keys evenly.
 Worst Case: Degrades to O(n) if all keys hash to the same index (extremely rare with a good
hash function).

8. Summary
 Hash tables provide an efficient way to store and retrieve data with average O(1) time
complexity.
 Choosing a good hash function and collision resolution technique is crucial for performance.
 Hash tables are widely used in real-world applications like caching, database indexing, and
counting frequencies.

9. Practice Questions
1. Implement a hash table in PHP using linear probing for collision resolution.
2. Write a function to count the frequency of each character in a given string using a hash table.
3. Explain how hash tables are used in the implementation of caching systems.
Lecture Note: Trees

1. Introduction to Trees
What is a Tree?

 A tree is a hierarchical data structure consisting of nodes connected by edges.


 The topmost node is called the root, and the nodes that have no children are called leaf
nodes.
 Each node contains:
1. A value (data).
2. References (or links) to its children.

Basic Terminology:

 Node: A single element in the tree, containing data and references to child nodes.
 Edge: The connection between nodes.
 Root: The topmost node of the tree.
 Leaf node: A node with no children.
 Parent: A node that has one or more child nodes.
 Child: A node that is a descendant of another node.

2. Binary Trees
What is a Binary Tree?

 A binary tree is a type of tree in which each node has at most two children (left and right).
 Each node in a binary tree has the following properties:

o Left Child: The left child node.


o Right Child: The right child node.

Example:

        A
       / \
      B   C
     / \
    D   E

Here, A is the root; B and C are its left and right children; D and E are leaf nodes.
Types of Binary Trees:

 Full Binary Tree: Every node has either 0 or 2 children.


 Complete Binary Tree: All levels of the tree are fully filled except possibly for the last level,
which is filled from left to right.
 Perfect Binary Tree: All internal nodes have two children, and all leaf nodes are at the same
level.

3. Binary Search Trees (BST)


What is a Binary Search Tree?

 A binary search tree (BST) is a binary tree with the following properties:

1. The left subtree of a node contains only nodes with keys less than the node's key.
2. The right subtree of a node contains only nodes with keys greater than the node's
key.
3. Both the left and right subtrees are also binary search trees.

Example:

10
/ \
5 15
/ \ \
3 7 20

 In a BST, for the node with key 10:

o Left subtree contains 5, 3, 7 (all less than 10).


o Right subtree contains 15, 20 (all greater than 10).

Operations on BST:

1. Insertion: Insert a new key by traversing from the root, following the BST property.
2. Search: Search for a key by comparing with the current node, recursively going to the left or
right subtree.
3. Deletion: Deleting a node in a BST involves three cases:

o If the node has no children (leaf node), simply remove it.


o If the node has one child, remove the node and replace it with its child.
o If the node has two children, replace it with its in-order successor (smallest node in
the right subtree).
4. Tree Traversals
What is Tree Traversal?

 Tree traversal refers to the process of visiting all the nodes in a tree in a specific order. There
are three main types of depth-first traversal methods:

o In-order traversal (Left, Root, Right)


o Pre-order traversal (Root, Left, Right)
o Post-order traversal (Left, Right, Root)

Each traversal method gives different ways to process the nodes.

4.1. In-order Traversal

 In-order traversal visits the left subtree, the root node, and then the right subtree.
 This traversal method is particularly useful for binary search trees because it visits the nodes
in ascending order.

In-order Traversal Algorithm:

1. Traverse the left subtree.


2. Visit the root node.
3. Traverse the right subtree.

Example:

For the tree:

10
/ \
5 15
/ \ \
3 7 20

The in-order traversal would be:

3, 5, 7, 10, 15, 20

4.2. Pre-order Traversal

 Pre-order traversal visits the root node first, then the left subtree, and finally the right
subtree.

Pre-order Traversal Algorithm:

1. Visit the root node.


2. Traverse the left subtree.
3. Traverse the right subtree.

Example:

For the tree:

10
/ \
5 15
/ \ \
3 7 20

The pre-order traversal would be:

10, 5, 3, 7, 15, 20

4.3. Post-order Traversal

 Post-order traversal visits the left subtree, the right subtree, and then the root node.

Post-order Traversal Algorithm:

1. Traverse the left subtree.


2. Traverse the right subtree.
3. Visit the root node.

Example:

For the tree:

10
/ \
5 15
/ \ \
3 7 20

The post-order traversal would be:

3, 7, 5, 20, 15, 10
5. Simple PHP Code for Binary Search Tree
Here’s a simple implementation of a binary search tree (BST) with in-order
traversal:
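
A minimal sketch (class and method names are illustrative):

<?php
// Binary search tree with insert and in-order traversal (illustrative sketch)
class TreeNode {
    public $key;
    public $left = null;
    public $right = null;
    public function __construct($key) {
        $this->key = $key;
    }
}

class BinarySearchTree {
    private $root = null;

    public function insert($key) {
        $this->root = $this->insertNode($this->root, $key);
    }

    private function insertNode($node, $key) {
        if ($node === null) {
            return new TreeNode($key);
        }
        if ($key < $node->key) {
            $node->left = $this->insertNode($node->left, $key);   // smaller keys go left
        } else {
            $node->right = $this->insertNode($node->right, $key); // larger keys go right
        }
        return $node;
    }

    // In-order traversal: left subtree, root, right subtree (prints keys in sorted order)
    public function inOrder() {
        $this->inOrderNode($this->root);
        echo "\n";
    }

    private function inOrderNode($node) {
        if ($node !== null) {
            $this->inOrderNode($node->left);
            echo $node->key . " ";
            $this->inOrderNode($node->right);
        }
    }
}

// Example usage
$bst = new BinarySearchTree();
foreach ([10, 5, 15, 3, 7, 20] as $key) {
    $bst->insert($key);
}
$bst->inOrder(); // Output: 3 5 7 10 15 20
?>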

6. Summary
 Binary Tree: A tree in which each node has at most two children.
 Binary Search Tree (BST): A binary tree with properties that make it efficient for search,
insertion, and deletion operations.
 Tree Traversals: Methods for visiting all nodes in a tree in a specific order, such as in-order,
pre-order, and post-order.
 The in-order traversal of a BST gives the nodes in sorted order.

7. Practice Questions
1. Implement a binary search tree in PHP and perform all three types of traversals.
2. Write a function to search for a node in a binary search tree.
3. Implement the deletion operation for a node in a binary search tree.
Lecture Note: Advanced Trees

1. Introduction to Advanced Trees


What are Advanced Trees?

 Advanced trees are specialized types of binary trees used to optimize certain operations and
solve specific problems that basic binary trees (e.g., Binary Search Trees or BST) cannot
efficiently handle.
 These trees have self-balancing properties, meaning they automatically adjust their structure
to maintain certain performance guarantees, such as logarithmic time complexity for search,
insert, and delete operations.

In this lecture, we will cover:

 AVL Trees
 Red-Black Trees
 B-Trees

2. AVL Trees
What is an AVL Tree?

 An AVL Tree is a type of self-balancing binary search tree (BST).


 The balance factor of a node is the difference between the height of its left and right
subtrees. In an AVL tree, the balance factor must be -1, 0, or +1 for every node.
 If the balance factor of any node exceeds this range, the tree performs a rotation to restore
balance.

Rotations in AVL Trees

 Left Rotation (Single Rotation): Used when the right subtree is taller than the left.
 Right Rotation (Single Rotation): Used when the left subtree is taller than the right.
 Left-Right Rotation (Double Rotation): Used when a left rotation is followed by a right
rotation.
 Right-Left Rotation (Double Rotation): Used when a right rotation is followed by a left
rotation.
Example:
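
As an illustrative sketch, a single left rotation can be written as follows, assuming an AVL node object with public left, right, and height properties (these names are assumptions, not definitions from these notes):

<?php
// Left rotation around node $x (illustrative sketch)
function leftRotate($x) {
    $y = $x->right;        // $y becomes the new subtree root
    $x->right = $y->left;  // $y's left subtree is re-attached as $x's right subtree
    $y->left = $x;         // $x becomes $y's left child

    // Recompute heights bottom-up (height of a missing child treated as -1)
    $x->height = 1 + max($x->left ? $x->left->height : -1, $x->right ? $x->right->height : -1);
    $y->height = 1 + max($y->left ? $y->left->height : -1, $y->right ? $y->right->height : -1);

    return $y; // the caller re-links $y in place of $x
}
?>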

Operations in AVL Trees:

 Insertion: Similar to BST insertion, but requires balancing using rotations.


 Deletion: Removes a node and then balances the tree, if necessary.
 Search: Same as BST, but time complexity is O(log n) due to balancing.

3. Red-Black Trees
What is a Red-Black Tree?

 A Red-Black Tree is another type of self-balancing binary search tree with the following
properties:

1. Each node is either red or black.


2. The root is always black.
3. Every leaf is black.
4. If a red node has children, then the children must be black (no two red nodes can be
adjacent).
5. For every node, the number of black nodes from the node to all of its descendant
leaves is the same (black-height).

These rules ensure that the tree remains approximately balanced, providing O(log n)
time complexity for insertion, deletion, and searching operations.

Example:
Operations in Red-Black Trees:

 Insertion: Insert a node like in a BST, but after insertion, the tree may need to be recolored
or rotated to maintain the red-black properties.
 Deletion: A bit more complex than in AVL trees, but it ensures that the tree remains
balanced.
 Search: Same as BST, but with guaranteed O(log n) time complexity due to balancing
properties.

4. B-Trees
What is a B-Tree?

 A B-Tree is a self-balancing search tree that is generalized for multi-way branching.


 In a B-tree, each node can have multiple children, unlike binary trees, where each node has
at most two children.
 B-trees are commonly used in databases and file systems to store large amounts of data on
disk, where nodes can hold many elements to reduce the number of disk accesses.

Properties of B-Trees:

1. Node Size: Each node can store more than one key, and the keys are kept sorted within each
node.
2. Balanced Structure: All leaf nodes are at the same level.
3. Order: A B-tree is defined by its order (m), which dictates the maximum number of children
a node can have.

o Maximum number of children: m


o Minimum number of children: ⌈ m/2⌉

Example of a B-tree (Order 3):

[10 | 20]
/ | \
[5] [15] [25 | 30]

Operations in B-Trees:

 Insertion: Inserting keys into a B-tree might require splitting nodes if they exceed the
maximum number of keys.
 Deletion: Deletion involves merging nodes if necessary to maintain the B-tree properties.
 Search: Efficient due to the multi-level structure and the fact that all leaf nodes are at the
same depth.
5. Applications of Trees
Applications of AVL Trees and Red-Black Trees:

 Database Indexing: Both AVL and Red-Black trees are used in databases for fast indexing and
efficient querying.
 Memory Management: Used in operating systems for managing memory allocations and
deallocations.
 File Systems: Both AVL and Red-Black trees help organize and manage file directories
efficiently.

Applications of B-Trees:

 Database Management: B-trees are widely used in databases for indexing large datasets
stored on disk.
 File Systems: File systems like FAT and NTFS use B-trees to store file metadata and index files
efficiently.

6. Summary
 AVL Trees are self-balancing BSTs where each node maintains a balance factor. They provide
O(log n) search, insertion, and deletion.
 Red-Black Trees are balanced BSTs with additional color properties that guarantee O(log n)
time complexity for search, insert, and delete operations.
 B-Trees are multi-way balanced search trees used for storing large datasets in external
storage (e.g., databases, file systems).
 Advanced trees like AVL, Red-Black, and B-Trees help optimize performance for complex data
management and retrieval tasks.

7. Practice Questions
1. Implement an AVL tree in PHP and perform insertion and search operations.
2. Write a PHP function to perform a left rotation on a node in an AVL tree.
3. Create a simple B-tree implementation to handle insertion and searching.

This lecture covered advanced trees such as AVL trees, Red-Black trees, and B-
trees, their properties, operations, and applications.
Lecture Note: Graphs

1. Introduction to Graphs
What is a Graph?

 A graph is a data structure used to represent relationships between objects.


 It consists of a set of vertices (nodes) and a set of edges (connections) between them.
 Graphs can be directed (edges have a direction) or undirected (edges have no direction).

Graph Terminology:

 Vertex (Node): A point in the graph (e.g., A, B, C).


 Edge (Arc): A connection between two vertices (e.g., A-B).
 Directed Graph (Digraph): A graph where edges have a direction (e.g., A → B).
 Undirected Graph: A graph where edges have no direction (e.g., A -- B).
 Weighted Graph: A graph in which each edge has a weight or cost.
 Unweighted Graph: A graph where all edges are assumed to have equal weight.
 Path: A sequence of edges that connects two vertices.
 Cycle: A path that starts and ends at the same vertex without repeating any edges.

2. Graph Representations
There are two common ways to represent graphs:

a. Adjacency Matrix:

 An adjacency matrix is a 2D array (matrix) where each element matrix[i][j] indicates if


there is an edge from vertex i to vertex j.
 For an undirected graph, the matrix is symmetric, meaning matrix[i][j] =
matrix[j][i].
 For a directed graph, the matrix is not necessarily symmetric.

Example of an Adjacency Matrix:

In the matrix:

 1 indicates an edge between the corresponding vertices.
 0 indicates no edge between them.
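
As an illustration, assume a small undirected graph with vertices A, B, C, D and edges A-B, A-C, B-C, B-D (this particular graph is chosen only for the example). Its adjacency matrix is:

    A  B  C  D
A   0  1  1  0
B   1  0  1  1
C   1  1  0  0
D   0  1  0  0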

b. Adjacency List:

 An adjacency list is a more space-efficient way to represent graphs.


 It uses a list (or array) of lists, where each list contains the vertices adjacent to the vertex at
that position.

Example of an Adjacency List:

In this representation, each vertex points to a list of the vertices that are directly connected to it.
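
For the same assumed example graph (edges A-B, A-C, B-C, B-D), the adjacency list is:

A: B, C
B: A, C, D
C: A, B
D: B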
3. Graph Traversal
Graph traversal refers to the process of visiting all the nodes in a graph. There are two
main types of graph traversal:

a. Breadth-First Search (BFS):

 BFS is an algorithm for traversing or searching graph data structures.


 It starts at a given node (source node) and explores all its neighbors before moving on to
their neighbors.
 BFS explores the graph level by level.

BFS Algorithm:

1. Start from a source node.


2. Visit all adjacent vertices and mark them as visited.
3. Add the unvisited vertices to a queue.
4. Repeat until all vertices are visited.

Example Code (BFS in PHP):
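
A minimal BFS sketch over an adjacency list, using an array as the queue (the sample graph and function name are illustrative):

<?php
// Breadth-First Search over an adjacency list (illustrative sketch)
function bfs($graph, $start) {
    $visited = [$start => true];
    $queue = [$start];

    while (!empty($queue)) {
        $node = array_shift($queue); // dequeue the next node
        echo $node . " ";

        foreach ($graph[$node] as $neighbor) {
            if (!isset($visited[$neighbor])) {
                $visited[$neighbor] = true; // mark before enqueueing to avoid duplicates
                $queue[] = $neighbor;
            }
        }
    }
}

// Example usage (adjacency list)
$graph = [
    'A' => ['B', 'C'],
    'B' => ['A', 'C', 'D'],
    'C' => ['A', 'B'],
    'D' => ['B'],
];
bfs($graph, 'A'); // Output: A B C D (visited level by level)
?>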


b. Depth-First Search (DFS):

 DFS is an algorithm for traversing or searching graph data structures.


 It starts at the source node and explores as far as possible along each branch before
backtracking.
 DFS can be implemented using recursion or using a stack.

DFS Algorithm:

1. Start from a source node.


2. Visit the node and recursively visit all its neighbors.
3. Mark nodes as visited to avoid revisiting.

Example Code (DFS in PHP):
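
A recursive DFS sketch over the same adjacency-list representation (illustrative):

<?php
// Recursive Depth-First Search over an adjacency list (illustrative sketch)
function dfs($graph, $node, &$visited) {
    $visited[$node] = true; // mark the node so it is not revisited
    echo $node . " ";

    foreach ($graph[$node] as $neighbor) {
        if (!isset($visited[$neighbor])) {
            dfs($graph, $neighbor, $visited); // explore as deep as possible before backtracking
        }
    }
}

// Example usage
$graph = [
    'A' => ['B', 'C'],
    'B' => ['A', 'C', 'D'],
    'C' => ['A', 'B'],
    'D' => ['B'],
];
$visited = [];
dfs($graph, 'A', $visited); // Output: A B C D (order depends on neighbor order)
?>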


4. Shortest Path Algorithms
Finding the shortest path between nodes in a graph is an important problem in many
applications, such as networking and route planning. Two popular algorithms to solve
this problem are:

a. Dijkstra's Algorithm (for Weighted Graphs)

 Dijkstra's Algorithm finds the shortest path from a source node to all other nodes in a
weighted graph with non-negative edge weights.
 It works by visiting nodes in increasing order of their distance from the source node.

Dijkstra's Algorithm Steps:

1. Initialize the distance to the source node as 0 and all other nodes as infinity.
2. Mark all nodes as unvisited. Start with the source node.
3. For the current node, consider all its unvisited neighbors and calculate their tentative
distances.
4. Choose the unvisited node with the smallest tentative distance and mark it as visited.
5. Repeat until all nodes are visited or the smallest tentative distance is infinity.

Example Code (Dijkstra's Algorithm in PHP):

class Graph {
    private $adjList;

    public function __construct() {
        $this->adjList = [];
    }

    public function addEdge($u, $v, $weight) {
        $this->adjList[$u][$v] = $weight;
        $this->adjList[$v][$u] = $weight; // for undirected graph
    }

    public function dijkstra($start) {
        $distances = [];
        $previous = [];
        $unvisited = [];

        foreach ($this->adjList as $node => $neighbors) {
            $distances[$node] = INF;
            $previous[$node] = null;
            $unvisited[$node] = true;
        }
        $distances[$start] = 0;

        while (!empty($unvisited)) {
            // Pick the unvisited node with the smallest tentative distance
            $minNode = $this->getMinNode($unvisited, $distances);
            unset($unvisited[$minNode]);

            foreach ($this->adjList[$minNode] as $neighbor => $weight) {
                $alt = $distances[$minNode] + $weight;
                if ($alt < $distances[$neighbor]) {
                    $distances[$neighbor] = $alt;
                    $previous[$neighbor] = $minNode;
                }
            }
        }

        return $distances;
    }

    private function getMinNode($unvisited, $distances) {
        $minNode = null;
        foreach ($unvisited as $node => $val) {
            if ($minNode === null || $distances[$node] < $distances[$minNode]) {
                $minNode = $node;
            }
        }
        return $minNode;
    }
}

// Example usage
$graph = new Graph();
$graph->addEdge('A', 'B', 1);
$graph->addEdge('A', 'C', 4);
$graph->addEdge('B', 'C', 2);
$graph->addEdge('B', 'D', 5);
$graph->addEdge('C', 'D', 1);

print_r($graph->dijkstra('A'));
// Shortest distances from A: A => 0, B => 1, C => 3, D => 4
Lecture Note: Sorting Algorithms

Sorting is the process of arranging data in a particular order, typically in ascending or


descending order. Sorting algorithms are fundamental in computer science and are
widely used in a variety of applications such as searching, optimization, and data
analysis.

1. Introduction to Sorting Algorithms


Sorting algorithms are designed to rearrange a collection of elements, typically an
array or list, in a specific order. The choice of sorting algorithm depends on the size of
the dataset, the time complexity, and the space complexity.

Types of Sorting Algorithms:

 Comparison-based Sorting: These algorithms compare elements to decide their order (e.g.,
bubble sort, quicksort).
 Non-comparison-based Sorting: These algorithms do not compare elements, but instead use
other methods, like counting (e.g., counting sort).

In this lecture, we'll focus on the most commonly used comparison-based sorting
algorithms.

2. Bubble Sort
Bubble sort is one of the simplest sorting algorithms. It works by repeatedly swapping
adjacent elements if they are in the wrong order. This process continues until no more
swaps are needed, which means the list is sorted.

Bubble Sort Algorithm:

1. Compare each pair of adjacent elements.


2. If the elements are in the wrong order, swap them.
3. Continue this process for all the elements.
4. Repeat the process until no swaps are made.
Example Code (Bubble Sort in PHP):
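
A bubble sort sketch following the steps above, with an early-exit flag so the best case stays O(n):

<?php
// Bubble sort: repeatedly swap adjacent out-of-order elements (illustrative sketch)
function bubbleSort($arr) {
    $n = count($arr);
    for ($i = 0; $i < $n - 1; $i++) {
        $swapped = false;
        for ($j = 0; $j < $n - $i - 1; $j++) {
            if ($arr[$j] > $arr[$j + 1]) {
                // Swap adjacent elements that are in the wrong order
                [$arr[$j], $arr[$j + 1]] = [$arr[$j + 1], $arr[$j]];
                $swapped = true;
            }
        }
        if (!$swapped) {
            break; // no swaps in this pass, so the array is already sorted
        }
    }
    return $arr;
}

print_r(bubbleSort([5, 1, 4, 2, 8])); // Sorted: 1, 2, 4, 5, 8
?>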

Time Complexity:

 Best Case: O(n) (if the array is already sorted)


 Average Case: O(n²)
 Worst Case: O(n²)

3. Insertion Sort
Insertion sort works by dividing the list into a sorted and an unsorted part. It picks
elements from the unsorted part and inserts them into the correct position in the sorted
part.

Insertion Sort Algorithm:

1. Start from the second element of the array.


2. Compare the current element with the previous elements.
3. Insert the current element into its correct position by shifting larger elements.
4. Repeat this until the entire array is sorted.
Example Code (Insertion Sort in PHP):
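
An insertion sort sketch following the steps above:

<?php
// Insertion sort: grow a sorted prefix by inserting each element in place (illustrative sketch)
function insertionSort($arr) {
    $n = count($arr);
    for ($i = 1; $i < $n; $i++) {
        $key = $arr[$i];
        $j = $i - 1;
        // Shift larger elements one position to the right
        while ($j >= 0 && $arr[$j] > $key) {
            $arr[$j + 1] = $arr[$j];
            $j--;
        }
        $arr[$j + 1] = $key; // insert the element into its correct position
    }
    return $arr;
}

print_r(insertionSort([12, 11, 13, 5, 6])); // Sorted: 5, 6, 11, 12, 13
?>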

Time Complexity:

 Best Case: O(n) (if the array is already sorted)


 Average Case: O(n²)
 Worst Case: O(n²)

4. Selection Sort
Selection sort works by repeatedly finding the minimum element from the unsorted
part and swapping it with the first unsorted element.

Selection Sort Algorithm:

1. Find the smallest element in the unsorted part of the array.


2. Swap the smallest element with the first unsorted element.
3. Move the boundary of the sorted part one step to the right.
4. Repeat the process until the entire array is sorted.
Example Code (Selection Sort in PHP):
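
A selection sort sketch following the steps above:

<?php
// Selection sort: repeatedly select the minimum of the unsorted part (illustrative sketch)
function selectionSort($arr) {
    $n = count($arr);
    for ($i = 0; $i < $n - 1; $i++) {
        $minIndex = $i;
        for ($j = $i + 1; $j < $n; $j++) {
            if ($arr[$j] < $arr[$minIndex]) {
                $minIndex = $j; // remember the smallest element seen so far
            }
        }
        // Swap the smallest element into the first unsorted position
        [$arr[$i], $arr[$minIndex]] = [$arr[$minIndex], $arr[$i]];
    }
    return $arr;
}

print_r(selectionSort([64, 25, 12, 22, 11])); // Sorted: 11, 12, 22, 25, 64
?>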

Time Complexity:

 Best Case: O(n²)


 Average Case: O(n²)
 Worst Case: O(n²)

5. Merge Sort
Merge sort is a divide and conquer algorithm. It divides the array into two halves,
sorts each half recursively, and then merges the sorted halves.

Merge Sort Algorithm:

1. Divide the array into two halves.


2. Recursively sort each half.
3. Merge the sorted halves.
Example Code (Merge Sort in PHP):
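
A merge sort sketch following the steps above (the helper name mergeHalves is an illustrative choice):

<?php
// Merge sort: split, sort each half recursively, then merge (illustrative sketch)
function mergeSort($arr) {
    if (count($arr) <= 1) {
        return $arr; // base case: a single element is already sorted
    }
    $mid = intdiv(count($arr), 2);
    $left = mergeSort(array_slice($arr, 0, $mid));
    $right = mergeSort(array_slice($arr, $mid));
    return mergeHalves($left, $right);
}

// Merge two sorted arrays into one sorted array
function mergeHalves($left, $right) {
    $result = [];
    $i = $j = 0;
    while ($i < count($left) && $j < count($right)) {
        if ($left[$i] <= $right[$j]) {
            $result[] = $left[$i++];
        } else {
            $result[] = $right[$j++];
        }
    }
    // Append whatever remains in either half
    return array_merge($result, array_slice($left, $i), array_slice($right, $j));
}

print_r(mergeSort([38, 27, 43, 3, 9, 82, 10])); // Sorted: 3, 9, 10, 27, 38, 43, 82
?>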

Time Complexity:

 Best Case: O(n log n)


 Average Case: O(n log n)
 Worst Case: O(n log n)

6. Quick Sort
Quick sort is another divide and conquer algorithm. It works by selecting a "pivot"
element from the array, partitioning the other elements into two sub-arrays, and
recursively sorting the sub-arrays.

Quick Sort Algorithm:

1. Pick a pivot element.


2. Partition the array into two parts: one with elements less than the pivot and one with
elements greater than the pivot.
3. Recursively sort the two parts.
Example Code (Quick Sort in PHP):
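A minimal PHP sketch of quick sort that, for simplicity, uses the first element as the pivot and partitions into new arrays (an in-place partition is also common):

function quickSort($arr) {
    if (count($arr) <= 1) {
        return $arr;
    }

    // Pick the first element as the pivot (a simple, common choice)
    $pivot = $arr[0];
    $left = [];
    $right = [];

    // Partition the remaining elements around the pivot
    for ($i = 1; $i < count($arr); $i++) {
        if ($arr[$i] < $pivot) {
            $left[] = $arr[$i];
        } else {
            $right[] = $arr[$i];
        }
    }

    // Recursively sort the partitions and combine them with the pivot
    return array_merge(quickSort($left), [$pivot], quickSort($right));
}

// Example usage
echo implode(', ', quickSort([5, 3, 8, 4, 2])); // Output: 2, 3, 4, 5, 8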

Time Complexity:

 Best Case: O(n log n)


 Average Case: O(n log n)
 Worst Case: O(n²) (when the pivot is not well chosen)

7. Heap Sort
Heap sort is based on the binary heap data structure. It involves building a max-
heap from the input data and then extracting the maximum element from the heap to
sort the array.

Heap Sort Algorithm:

1. Build a max-heap from the input array.


2. Swap the root (max element) with the last element.
3. Restore the heap property by heapifying the reduced heap.
4. Repeat until the array is sorted.
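A minimal PHP sketch of heap sort follows the steps above; the helper name heapify and the zero-based array representation of the heap are illustrative choices.

Example Code (Heap Sort in PHP):

function heapify(&$arr, $n, $i) {
    $largest = $i;
    $left = 2 * $i + 1;
    $right = 2 * $i + 2;

    // Find the largest among the root and its children
    if ($left < $n && $arr[$left] > $arr[$largest]) {
        $largest = $left;
    }
    if ($right < $n && $arr[$right] > $arr[$largest]) {
        $largest = $right;
    }

    // If the root is not the largest, swap and continue heapifying
    if ($largest != $i) {
        [$arr[$i], $arr[$largest]] = [$arr[$largest], $arr[$i]];
        heapify($arr, $n, $largest);
    }
}

function heapSort($arr) {
    $n = count($arr);

    // Build a max-heap from the input array
    for ($i = intdiv($n, 2) - 1; $i >= 0; $i--) {
        heapify($arr, $n, $i);
    }

    // Repeatedly move the current maximum to the end and restore the heap
    for ($i = $n - 1; $i > 0; $i--) {
        [$arr[0], $arr[$i]] = [$arr[$i], $arr[0]];
        heapify($arr, $i, 0);
    }
    return $arr;
}

// Example usage
echo implode(', ', heapSort([5, 3, 8, 4, 2])); // Output: 2, 3, 4, 5, 8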
Time Complexity:

 Best Case: O(n log n)


 Average Case: O(n log n)
 Worst Case: O(n log n)

Conclusion
Each sorting algorithm has its strengths and weaknesses. Bubble Sort, Insertion Sort,
and Selection Sort are simple and easy to understand but are inefficient for large
datasets. Merge Sort, Quick Sort, and Heap Sort are more efficient for larger
datasets, with quick sort being the most widely used for practical applications.
Lecture Note: Searching Algorithms

Searching algorithms are fundamental techniques for locating an element within a data structure. These algorithms help determine whether an element is present and, if so, its position in a collection. Searching is widely used in many applications such as databases, web searches, and more.

In this lecture, we'll cover some basic and advanced searching algorithms, including
linear search, binary search, and searching algorithms on trees and graphs.

1. Introduction to Searching Algorithms


 Searching: The process of finding a specific element within a collection of data (array, list,
tree, graph, etc.).
 Linear Search: A simple search method that examines each element of the array or list in
sequence.
 Binary Search: An efficient search algorithm that works on sorted arrays or lists by
repeatedly dividing the search interval in half.
 Searching in Trees and Graphs: Specialized methods for searching elements in tree and
graph data structures, such as Depth-First Search (DFS) and Breadth-First Search (BFS).

2. Linear Search
Linear search, also known as sequential search, is the most straightforward searching
algorithm. It checks each element in the list or array one by one until the desired
element is found or all elements are checked.

Linear Search Algorithm:

1. Start from the first element of the array or list.


2. Compare the current element with the target element.
3. If a match is found, return the index.
4. If no match is found by the end of the list, return a signal indicating the target is not present.
Example Code (Linear Search in PHP):

function linearSearch($arr, $target) {
    foreach ($arr as $index => $value) {
        if ($value == $target) {
            return $index; // Element found
        }
    }
    return -1; // Element not found
}

// Example usage
$numbers = [10, 20, 30, 40, 50];
$target = 30;
$result = linearSearch($numbers, $target);
echo $result; // Output: 2 (index of the target)

Time Complexity:

 Best Case: O(1) (if the target is the first element)


 Worst Case: O(n) (if the target is at the end or not present)

3. Binary Search
Binary search is an efficient search algorithm that works on sorted data structures
(arrays or lists). It divides the search space into half at each step, making it much
faster than linear search, especially for large datasets.

Binary Search Algorithm:

1. Find the middle element of the array.


2. If the middle element is the target, return its index.
3. If the target is smaller than the middle element, search the left half.
4. If the target is larger, search the right half.
5. Repeat this process until the element is found or the search space is exhausted.

Example Code (Binary Search in PHP):

function binarySearch($arr, $target) {
    $low = 0;
    $high = count($arr) - 1;

    while ($low <= $high) {
        $mid = floor(($low + $high) / 2);

        if ($arr[$mid] == $target) {
            return $mid; // Element found
        }

        if ($arr[$mid] < $target) {
            $low = $mid + 1;
        } else {
            $high = $mid - 1;
        }
    }

    return -1; // Element not found
}

// Example usage
$numbers = [10, 20, 30, 40, 50];
$target = 30;
$result = binarySearch($numbers, $target);
echo $result; // Output: 2 (index of the target)

Time Complexity:

 Best Case: O(1) (if the target is the middle element)


 Worst Case: O(log n) (since the search space is halved each time)

4. Searching Algorithms on Trees


Depth-First Search (DFS)

DFS is a graph and tree traversal algorithm that explores as far as possible along each
branch before backtracking. It is useful for exploring all nodes of a tree or graph.

DFS Algorithm (Pre-order Traversal):

1. Start at the root node.


2. Visit the node.
3. Recursively visit the left subtree, then the right subtree.
Example Code (DFS in PHP):

function dfs($tree, $target) {
    if (empty($tree)) {
        return false;
    }

    if ($tree[0] == $target) {
        return true; // Target found
    }

    // Recur for the left and right subtrees
    return dfs($tree[1], $target) || dfs($tree[2], $target);
}

// Example usage
$tree = [10, [5, null, null], [15, null, null]]; // [value, left_subtree, right_subtree]
$target = 5;
$result = dfs($tree, $target);
echo $result ? "Found" : "Not Found"; // Output: Found

Time Complexity of DFS:

 Best Case: O(1) (if the element is found at the root)


 Worst Case: O(n) (if the element is not found, or is found in the deepest node)

Breadth-First Search (BFS)

BFS is another tree and graph traversal algorithm, which explores all nodes at the
present depth level before moving on to nodes at the next depth level.

BFS Algorithm:

1. Start at the root node.


2. Explore all the neighbors at the present depth level before moving on to nodes at the next
level.
3. Continue this process until the target is found or all nodes are explored.

Example Code (BFS in PHP):

function bfs($tree, $target) {
    if (empty($tree)) {
        return false;
    }

    $queue = [$tree];

    while (count($queue) > 0) {
        $currentNode = array_shift($queue);

        if ($currentNode[0] == $target) {
            return true; // Target found
        }

        // Add left and right subtrees to the queue
        if ($currentNode[1]) {
            $queue[] = $currentNode[1];
        }
        if ($currentNode[2]) {
            $queue[] = $currentNode[2];
        }
    }

    return false; // Element not found
}

// Example usage
$tree = [10, [5, null, null], [15, null, null]];
$target = 5;
$result = bfs($tree, $target);
echo $result ? "Found" : "Not Found"; // Output: Found


Time Complexity of BFS:

 Best Case: O(1) (if the element is found at the root)


 Worst Case: O(n) (if the element is not found, or is found in the deepest node)

5. Searching Algorithms on Graphs


The algorithms used to search for elements in graphs are similar to those used for
trees but differ in that graphs may have cycles and more complex structures.

Depth-First Search (DFS) on Graphs

DFS in graphs is similar to DFS in trees, but you must keep track of visited nodes to
avoid revisiting the same node.
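As an illustration, here is a minimal PHP sketch of DFS over a graph stored as an adjacency list; the names dfsGraph and $visited are assumptions made for this example.

function dfsGraph($graph, $start, $target, &$visited = []) {
    if (isset($visited[$start])) {
        return false; // Already visited: avoid revisiting (handles cycles)
    }
    $visited[$start] = true;

    if ($start == $target) {
        return true; // Target found
    }

    // Explore each neighbour as far as possible before backtracking
    foreach ($graph[$start] as $neighbour) {
        if (dfsGraph($graph, $neighbour, $target, $visited)) {
            return true;
        }
    }
    return false;
}

// Example usage: adjacency list containing a cycle A-B-A
$graph = [
    'A' => ['B', 'C'],
    'B' => ['A', 'D'],
    'C' => ['A'],
    'D' => [],
];
echo dfsGraph($graph, 'A', 'D') ? "Found" : "Not Found"; // Output: Found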

Breadth-First Search (BFS) on Graphs

Similarly, BFS can be used in graphs with the same logic as in trees, ensuring that you
avoid revisiting nodes.

Conclusion
Searching algorithms are essential tools for efficiently finding elements in data
structures. Linear Search is simple but inefficient for large datasets. Binary Search
is more efficient but requires sorted data. For tree and graph structures, DFS and BFS
are widely used, with DFS going deeper into a structure before backtracking and BFS
exploring level by level. Understanding these algorithms helps in building efficient
applications, particularly in areas such as database searching, web searches, and
pathfinding.
Lecture Note: Dynamic Programming
1. Introduction to Dynamic Programming (DP)

Dynamic Programming is a problem-solving approach that breaks down complex problems into smaller subproblems and solves each subproblem just once, saving its solution in a table to avoid redundant calculations. This technique is particularly useful for optimization problems, where finding an optimal solution requires exploring many possibilities.

Key Concepts:

 Overlapping Subproblems: When the same subproblems are solved multiple times.
 Optimal Substructure: The optimal solution of the problem can be constructed from the
optimal solutions of its subproblems.

2. Two Approaches in Dynamic Programming

 Memoization (Top-Down Approach): This approach solves the problem recursively and
stores the results of subproblems to avoid redundant calculations.
 Tabulation (Bottom-Up Approach): This approach solves the problem iteratively, filling up a
table based on previously computed values.
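As a small illustration of the tabulation approach, a bottom-up version of the Fibonacci computation (the memoized version appears in Section 3.1 below) might look like the following PHP sketch; the function name fibonacciTab is illustrative.

function fibonacciTab($n) {
    if ($n <= 1) {
        return $n;
    }

    // Table of subproblem results, filled from the smallest values upwards
    $table = [0, 1];
    for ($i = 2; $i <= $n; $i++) {
        $table[$i] = $table[$i - 1] + $table[$i - 2];
    }
    return $table[$n];
}

// Example usage
echo fibonacciTab(10); // Output: 55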

3. Example Problems Solved Using Dynamic Programming

3.1 Fibonacci Series using Memoization

The Fibonacci sequence is a series of numbers where each number is the sum of the
two preceding ones, starting from 0 and 1.

Recursive formula: F(n) = F(n−1) + F(n−2), with base cases F(0) = 0 and F(1) = 1.

Memoization (Top-Down) Code:

function fibonacci($n, &$memo = []) {
    if ($n <= 1) {
        return $n;
    }

    // Check if the result is already computed
    if (isset($memo[$n])) {
        return $memo[$n];
    }

    // Compute and store the result in the memo array
    $memo[$n] = fibonacci($n - 1, $memo) + fibonacci($n - 2, $memo);

    return $memo[$n];
}

// Example usage
echo fibonacci(10); // Output: 55

This code memoizes the results of the Fibonacci function to avoid recalculating the
same values multiple times.

3.2 Knapsack Problem (0/1 Knapsack)

The 0/1 Knapsack problem involves selecting items with given weights and values,
such that the total weight does not exceed a given limit, and the total value is
maximized.

Problem Statement:

 We are given a set of items, each with a weight and a value.


 We need to determine the maximum value we can carry in the knapsack without exceeding
its weight capacity.

Recursive Formula:

K(n, W) = max(K(n−1, W), value[n−1] + K(n−1, W − weight[n−1]))

Where:

 K(n, W) is the maximum value obtainable from the first n items with remaining weight capacity W.

Memoization (Top-Down) Code:

function knapsack($weights, $values, $capacity, $n, &$memo = []) {
    // Base case: no items or no capacity
    if ($n == 0 || $capacity == 0) {
        return 0;
    }

    // Check if the result is already computed
    if (isset($memo[$n][$capacity])) {
        return $memo[$n][$capacity];
    }

    // If the weight of the nth item is more than the capacity, skip it
    if ($weights[$n - 1] > $capacity) {
        $memo[$n][$capacity] = knapsack($weights, $values, $capacity, $n - 1, $memo);
    } else {
        // Take the maximum of including or excluding the nth item
        $memo[$n][$capacity] = max(
            knapsack($weights, $values, $capacity, $n - 1, $memo),
            $values[$n - 1] + knapsack($weights, $values, $capacity - $weights[$n - 1], $n - 1, $memo)
        );
    }

    return $memo[$n][$capacity];
}

// Example usage
$weights = [2, 3, 4, 5];
$values = [3, 4, 5, 6];
$capacity = 5;
$n = count($weights);
echo knapsack($weights, $values, $capacity, $n); // Output: 7

In this solution, we use memoization to store the results of subproblems and avoid
redundant calculations.

4. Summary of Dynamic Programming Techniques

 Memoization: Saves the results of subproblems to avoid redundant calculations. This is a top-down approach that solves the problem recursively.
 Tabulation: A bottom-up approach that starts with smaller subproblems and builds up to solve the entire problem iteratively.

5. Time Complexity of Dynamic Programming Solutions

 Fibonacci with Memoization: O(n) because we calculate each Fibonacci number once.
 Knapsack Problem with Memoization: O(n * W), where n is the number of items and W is
the maximum weight capacity of the knapsack.

6. Applications of Dynamic Programming

 Optimization Problems: Problems like shortest path, longest common subsequence, and
sequence alignment in bioinformatics.
 Game Theory: Solving problems like optimal strategies in two-player games.
 Resource Allocation: Problems like job scheduling, partition problems, and others requiring
maximizing or minimizing resources.
Lecture Note: Greedy Algorithms
12.1 Introduction to Greedy Algorithms

Greedy algorithms are a class of algorithms that follow the problem-solving heuristic
of making the locally optimal choice at each stage with the hope of finding the global
optimum. The basic idea is to choose the best possible option at each step, without
reconsidering previous choices. Greedy algorithms are efficient, but they don’t always
lead to the optimal solution for every problem. They work best when the problem has
the greedy-choice property (local choices lead to global solutions) and optimal
substructure (optimal solutions to subproblems can be combined to form an optimal
solution to the overall problem).

Key Characteristics:

 Make a series of choices by picking the best option available at each step.
 Do not reconsider or backtrack (thus faster than exhaustive search algorithms).
 Do not guarantee an optimal solution for all problems.

12.2 Examples of Greedy Algorithms

12.2.1 Coin Change Problem (Greedy Approach)

In the coin change problem, given a set of coin denominations and a total amount of
money, the goal is to determine the minimum number of coins needed to make the
total amount.

Greedy Approach:

 Always pick the largest coin denomination that does not exceed the remaining amount.
 Repeat until the amount is reduced to zero.

Coin Change Problem Code:

function coinChange(coins, amount) {
    let count = 0;

    for (let coin of coins) {
        if (coin <= amount) {
            count += Math.floor(amount / coin); // Number of coins of this denomination
            amount %= coin; // Update the remaining amount
        }
    }

    return amount === 0 ? count : -1; // Return -1 if change cannot be made
}

// Example usage
const coins = [25, 10, 5, 1]; // Coin denominations (in cents)
const amount = 63;
console.log(coinChange(coins, amount)); // Output: 6 (2 quarters, 1 dime, 3 pennies)

Explanation:

In this approach, the function always chooses the largest coin denomination that does not exceed the remaining amount, reducing the problem to smaller subproblems. Note that this greedy strategy is only guaranteed to be optimal for canonical coin systems such as [25, 10, 5, 1]; with denominations like [4, 3, 1] and an amount of 6, it returns 4 + 1 + 1 (three coins) instead of the optimal 3 + 3 (two coins).

12.2.2 Job Scheduling Problem

The job scheduling problem involves scheduling a set of jobs, each with a start time,
end time, and profit. The goal is to select a subset of jobs that don’t overlap,
maximizing the total profit.

Greedy Approach:

 Sort the jobs by their finish time.


 Select jobs in order of their finish time, ensuring that the selected job does not overlap with
the previously selected job.

Job Scheduling Problem Code:

function jobScheduling(jobs) {
    // Sort jobs by their finish time
    jobs.sort((a, b) => a[1] - b[1]);

    let selectedJobs = [];
    let lastEndTime = 0;

    for (let job of jobs) {
        if (job[0] >= lastEndTime) { // Job start time >= last selected job's end time
            selectedJobs.push(job);
            lastEndTime = job[1]; // Update last end time
        }
    }

    return selectedJobs;
}

// Example usage
const jobs = [[1, 4, 50], [2, 6, 60], [5, 7, 120], [3, 8, 30]]; // [start, finish, profit]
console.log(jobScheduling(jobs)); // Output: [[1, 4, 50], [5, 7, 120]] – selected non-overlapping jobs

Explanation:

The jobs are sorted by their finish time, and jobs are selected greedily by checking that they do not overlap with the previously selected job. This earliest-finish-time rule guarantees the maximum number of non-overlapping jobs; when total profit must be maximized in general, a weighted job-scheduling formulation (usually solved with dynamic programming) is needed, although in this example the greedy selection also happens to give the highest profit.
Lecture Note: Algorithm Complexity Analysis
13.1 Introduction to Algorithm Complexity Analysis

Algorithm complexity analysis is the process of determining the efficiency of an algorithm in terms of time and space. The primary goal of this analysis is to evaluate how well an algorithm performs as the input size increases. It helps to understand the resources an algorithm will need to solve a problem.

There are two main types of complexity:

 Time Complexity: The amount of time an algorithm takes to run as a function of the input
size.
 Space Complexity: The amount of memory space an algorithm uses as a function of the input
size.

13.2 Time Complexity

Time complexity gives an estimate of how long an algorithm takes to run based on the
size of the input. It is often expressed using Big O notation, which describes the upper
bound of an algorithm's growth rate. Common time complexities include:

 O(1): Constant time – the algorithm’s running time does not depend on the input size.
 O(n): Linear time – the running time grows linearly with the input size.
 O(log n): Logarithmic time – the running time grows logarithmically with the input size.
 O(n^2): Quadratic time – the running time grows quadratically with the input size.

Examples of Time Complexity:

Constant Time (O(1)): Accessing an element in an array.

function getElement(arr, index) {
    return arr[index]; // O(1) operation
}

console.log(getElement([10, 20, 30], 1)); // Output: 20

Linear Time (O(n)): Iterating through an array

function printArray(arr) {
    for (let i = 0; i < arr.length; i++) {
        console.log(arr[i]); // O(n) operation
    }
}

printArray([1, 2, 3, 4, 5]); // Output: 1 2 3 4 5

Quadratic Time (O(n^2)): Nested loops.

function bubbleSort(arr) {
    let n = arr.length;

    for (let i = 0; i < n; i++) {
        for (let j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                // Swap the elements
                [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
            }
        }
    }
}

let arr = [5, 3, 8, 4, 2];
bubbleSort(arr);
console.log(arr); // Sorted array: [2, 3, 4, 5, 8]


13.3 Space Complexity

Space complexity refers to the amount of memory space required by an algorithm as the input size increases. It includes the space for:

 Input data
 Auxiliary space (extra space used by the algorithm)

Space Complexity Notations:

 O(1): Constant space – the algorithm uses a fixed amount of space regardless of the input
size.
 O(n): Linear space – the algorithm's space requirement grows linearly with the input size.
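As a small illustration, the following PHP sketch contrasts constant and linear auxiliary space; both function names are illustrative.

function sumIterative($arr) {
    $total = 0; // O(1) auxiliary space: a single accumulator
    foreach ($arr as $value) {
        $total += $value;
    }
    return $total;
}

function prefixSums($arr) {
    $sums = []; // O(n) auxiliary space: one entry per input element
    $running = 0;
    foreach ($arr as $value) {
        $running += $value;
        $sums[] = $running;
    }
    return $sums;
}

// Example usage
echo sumIterative([1, 2, 3, 4]) . PHP_EOL;              // Output: 10
echo implode(', ', prefixSums([1, 2, 3, 4])) . PHP_EOL; // Output: 1, 3, 6, 10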

13.4 Analyzing Best, Worst, and Average Cases

 Best Case: The minimum amount of time or space an algorithm will take for the best possible
input.
 Worst Case: The maximum time or space the algorithm will take for the worst possible input.
 Average Case: The expected time or space the algorithm will take over all possible inputs.

For example, consider the linear search algorithm:

 Best Case (O(1)): The target element is found at the first index.
 Worst Case (O(n)): The target element is not found, or it’s at the last index.
 Average Case (O(n)): On average, the algorithm will search half of the list.

13.5 Conclusion

Algorithm complexity analysis helps in understanding the efficiency of an algorithm, guiding developers in choosing the right algorithm for the task at hand. By focusing on time and space complexity, one can predict how an algorithm will perform as the input size grows.
