Lecture 3 Study Notes

An algorithm is a systematic procedure for solving problems, characterized by input, output, definiteness, finiteness, and effectiveness. Various design techniques include Divide and Conquer, Dynamic Programming, Greedy Algorithms, Backtracking, and Brute Force, each with specific applications. The efficiency of an algorithm is evaluated through time and space complexity, with trade-offs between time and space usage and the distinction between exact and approximate solutions.

Topic: Algorithm and Efficiency of an Algorithm

Introduction to Algorithms

An algorithm is a step-by-step procedure or formula for solving a problem. It consists of well-defined instructions that
take an input, process it, and produce an output.

Characteristics of an Algorithm:

1. Input: One or more inputs are provided.
2. Output: It must produce at least one output.
3. Definiteness: Each step must be precisely defined.
4. Finiteness: It must terminate after a finite number of steps.
5. Effectiveness: Each step should be simple and feasible.
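As a minimal sketch, the five characteristics can be seen in a simple algorithm that finds the largest element of a list (the function name and example values here are illustrative, not from the lecture):

```python
def find_max(values):
    """Return the largest element of a non-empty list."""
    # Input: a non-empty list of comparable values.
    largest = values[0]
    for v in values[1:]:       # Finiteness: the loop runs a finite number of times.
        if v > largest:        # Definiteness: each comparison is precisely defined.
            largest = v        # Effectiveness: each step is a simple, feasible operation.
    return largest             # Output: exactly one result is produced.

print(find_max([3, 7, 1, 9, 4]))  # 9
```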

Algorithm Design Techniques

1. Divide and Conquer: Breaking a problem into smaller subproblems, solving them independently, and combining the results (e.g., Merge Sort, Quick Sort).
2. Dynamic Programming: Solving each overlapping subproblem once and reusing the stored result, via memoization or tabulation (e.g., Fibonacci, Knapsack problem).
3. Greedy Algorithms: Making the locally optimal choice at each step (e.g., Kruskal's Algorithm, Dijkstra's Algorithm).
4. Backtracking: Exploring candidate solutions recursively and abandoning a path as soon as it cannot lead to a valid solution (e.g., N-Queens problem, Sudoku Solver).
5. Brute Force: Trying all possible solutions (e.g., naive String Matching, Traveling Salesman Problem).
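To make the first technique concrete, here is a short divide-and-conquer sketch of Merge Sort: the list is divided into halves, each half is sorted recursively, and the sorted halves are merged.

```python
def merge_sort(arr):
    # Divide: split the list until single elements remain (already sorted).
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```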

Efficiency of an Algorithm

The efficiency of an algorithm is measured by the resources it requires for execution, primarily time complexity and
space complexity.

1. Time Complexity

Time complexity measures the number of operations an algorithm performs as a function of input size n.

Asymptotic Notations:

 Big-O (O): Asymptotic upper bound on the growth rate; commonly used to describe the worst case.
 Omega (Ω): Asymptotic lower bound; commonly used to describe the best case.
 Theta (Θ): Tight bound, holding when the upper and lower bounds match; often quoted for the average case.

Common Time Complexities:

Complexity     Notation     Example
Constant       O(1)         Accessing an array element
Logarithmic    O(log n)     Binary Search
Linear         O(n)         Linear Search
Linearithmic   O(n log n)   Merge Sort, Quick Sort (average case)
Quadratic      O(n²)        Bubble Sort, Insertion Sort
Cubic          O(n³)        Naive matrix multiplication
Exponential    O(2ⁿ)        Recursive Fibonacci
Factorial      O(n!)        Traveling Salesman Problem (brute force)
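The gap between O(n) and O(log n) in the table can be illustrated by comparing linear and binary search on a sorted list (a sketch using only the standard library; `bisect_left` does the halving):

```python
from bisect import bisect_left

def linear_search(arr, target):
    # O(n): examines elements one by one.
    for i, v in enumerate(arr):
        if v == target:
            return i
    return -1

def binary_search(arr, target):
    # O(log n) on a sorted list: halves the search range each step.
    i = bisect_left(arr, target)
    return i if i < len(arr) and arr[i] == target else -1

data = list(range(0, 100, 2))  # sorted even numbers 0..98
print(linear_search(data, 42), binary_search(data, 42))  # 21 21
```

Both return the same index, but binary search touches only about log₂(n) elements, which is why it dominates for large sorted inputs.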
2. Space Complexity

Space complexity refers to the amount of memory required by an algorithm, including input storage and auxiliary storage.

 O(1): Constant space (e.g., swapping two numbers).
 O(n): Linear space (e.g., storing an array).
 O(n²): Quadratic space (e.g., adjacency matrix for a graph).
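A small sketch contrasting the first two cases (the function names are illustrative): an in-place swap needs only a constant number of extra variables, while building a prefix-sum list needs memory proportional to the input.

```python
def swap_in_place(a, i, j):
    # O(1) auxiliary space: only a fixed number of temporaries, regardless of len(a).
    a[i], a[j] = a[j], a[i]
    return a

def prefix_sums(a):
    # O(n) auxiliary space: the output list grows with the input size.
    sums, total = [], 0
    for v in a:
        total += v
        sums.append(total)
    return sums

print(swap_in_place([1, 2, 3], 0, 2))  # [3, 2, 1]
print(prefix_sums([1, 2, 3, 4]))       # [1, 3, 6, 10]
```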

Trade-offs in Algorithm Efficiency

1. Time vs. Space Trade-off: Reducing time complexity may increase space usage, and vice versa.
2. Exact vs. Approximate Solutions: Some problems are too expensive to solve exactly, so approximate or heuristic algorithms are used instead (e.g., heuristic search in AI).
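The time vs. space trade-off is neatly shown by the two Fibonacci implementations mentioned earlier: the naive recursion uses almost no extra memory but exponential time, while memoization spends O(n) cache space to bring the time down to O(n).

```python
from functools import lru_cache

def fib_naive(n):
    # O(2^n) time, no cache: recomputes the same subproblems repeatedly.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # O(n) time, but O(n) cache space: each value is computed once and stored,
    # a classic trade of memory for speed.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10))  # 55
print(fib_memo(30))   # 832040
```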
