
DESIGN AND ANALYSIS OF ALGORITHMS

Unit – 1

Algorithm:

An algorithm is a well-defined, step-by-step procedure designed to solve a specific problem or
perform a particular task. In computer science, algorithms are fundamental to programming and
data processing, enabling computers to execute tasks efficiently and effectively.

Essential Characteristics of an Algorithm

1. Input

o Accepts zero or more well-defined inputs drawn from specified sets.

2. Output

o Produces at least one output with a defined relationship to the input.

3. Definiteness

o Each step is precise, clear, and unambiguous—no vagueness allowed.

4. Finiteness

o Guaranteed to terminate after a finite number of steps.

5. Effectiveness

o Steps are basic enough to be executed, in principle, by a human with pencil and paper
within finite time.

Advantages of Algorithms:

 An algorithm is easy to understand.

 An algorithm is a step-wise representation of a solution to a given problem.

 In an algorithm, the problem is broken down into smaller pieces or steps; hence, it is easier
for the programmer to convert it into an actual program.

Disadvantages of Algorithms:

 Writing an algorithm for a large or complex problem is time-consuming.

 Understanding complex logic through algorithms can be very difficult.

 Branching and looping statements are difficult to represent in algorithms.

Steps to write the algorithm:


To write an algorithm in DAA (Design and Analysis of Algorithms), the process involves understanding
the problem, analyzing it, designing the solution, implementing it, testing it, optimizing it, and
documenting it. This systematic approach helps create efficient and effective algorithms (a small
end-to-end sketch is given after step 7 below).
Here's a more detailed breakdown of the steps:

1. Understand the Problem:

 Clearly define the problem you are trying to solve.


 Identify the inputs and expected outputs.
 Consider constraints and limitations of the problem.
 Clarify any ambiguities or assumptions about the problem.

2. Analyze the Problem:

 Determine the problem's characteristics and properties.


 Identify suitable algorithmic design techniques.
 Consider factors like time and space complexity.
 Think about potential optimizations.

3. Design the Algorithm:

 Choose an appropriate algorithm design technique (e.g., divide and conquer, dynamic
programming, greedy algorithms).
 Break down the problem into smaller subproblems if necessary.
 Define the steps of the algorithm in a clear and concise manner.
 Consider how the algorithm will handle various input cases.

4. Implement the Algorithm:

 Translate the designed algorithm into code using a programming language.


 Pay attention to code readability, maintainability, and clarity.
 Choose appropriate data structures to represent the problem and the algorithm's data.

5. Test the Algorithm:

 Thoroughly test the algorithm with various inputs, including edge cases and boundary
conditions.
 Identify and fix any errors or bugs.
 Validate that the algorithm produces the correct output for all valid inputs.
6. Optimize the Algorithm:

 Analyze the algorithm's performance, focusing on time and space complexity.


 Identify potential bottlenecks and areas for improvement.
 Apply optimization techniques to reduce the algorithm's runtime or memory usage.

7. Document the Algorithm:

 Create documentation that explains the algorithm's purpose, inputs, outputs, and
implementation details.
 Include information about the algorithm's time and space complexity.
 Document any assumptions, constraints, or limitations of the algorithm.
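
To tie the steps together, here is a minimal sketch in C of an implemented, tested and documented
algorithm (the find_max function and its test values are illustrative, not from the notes):

#include <stdio.h>

/*
 * find_max: returns the largest element of arr[0..n-1].
 * Assumes n >= 1. Time complexity: O(n); auxiliary space: O(1).
 */
int find_max(const int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}

int main(void)
{
    int data[] = {4, 9, 2, 9, 1};
    /* Test a typical case and an edge case (single element). */
    printf("%d\n", find_max(data, 5));   /* prints 9 */
    printf("%d\n", find_max(data, 1));   /* prints 4 */
    return 0;
}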

Specification of algorithm, Recursive algorithms:


 Example 1: Algorithm for calculating the factorial value of a number
 Step 1: a number n is inputted
 Step 2: variable final is set as 1
 Step 3: final <= final * n
 Step 4: decrease n
 Step 5: verify if n is equal to 0
 Step 6: if n is equal to zero, goto step 8 (break out of loop)
 Step 7: else goto step 3
 Step 8: the result final is printed
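
A minimal C sketch of these steps (assuming n is a positive integer; the function name is
illustrative):

int factorial_iter(int n)          /* assumes n >= 1 */
{
    int final = 1;                 /* Step 2 */
    while (n != 0)                 /* Steps 5-7: repeat until n reaches 0 */
    {
        final = final * n;         /* Step 3 */
        n = n - 1;                 /* Step 4 */
    }
    return final;                  /* Step 8: the result */
}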

Recursive Algorithms
A recursive algorithm is one that calls itself on a smaller instance of the same problem and uses
the result of that call to build its own answer. Recursion is a method of simplification that divides
the problem into sub-problems of the same nature; the result of one recursive call serves as input
to the next, and the repetition proceeds in a self-similar fashion. The algorithm calls itself with
smaller input values and obtains the result by performing simple operations on these smaller
values. Computing factorials and generating the Fibonacci series are typical examples of recursive
algorithms.
Example: Writing a factorial function using recursion

int factorialA(int n)
{
    if (n <= 1)                      /* base case: 0! = 1! = 1 */
        return 1;
    return n * factorialA(n - 1);    /* recursive case */
}
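
For example, factorialA(4) expands to 4 * factorialA(3), then 3 * factorialA(2), and so on down to
the base case, finally returning 24.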

Time Complexity:

 Definition:
Time complexity quantifies the amount of time an algorithm takes to run as a function of the length
of the input.

 Measurement:

It's often expressed using Big O notation (e.g., O(n), O(n log n), O(1)), which describes the upper
bound of the growth rate.

 Examples:

 O(1) (Constant): The algorithm takes the same amount of time regardless of the
input size (e.g., accessing an element in an array by index).

 O(n) (Linear): The time increases linearly with the input size (e.g., iterating through a
list once).

 O(log n) (Logarithmic): The time increases logarithmically with the input size (e.g.,
binary search).

 O(n log n) (Loglinear): The time increases with both linear and logarithmic factors
(e.g., efficient sorting algorithms like merge sort and quicksort).

 O(n²) (Quadratic): The time increases quadratically with the input size (e.g., nested
loops iterating through a list).
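
A small C sketch illustrating three of these classes (the function names are illustrative):

/* O(1): constant time - one array access, independent of n */
int get_first(const int arr[])
{
    return arr[0];
}

/* O(n): linear time - a single pass over n elements */
long sum_all(const int arr[], int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    return sum;
}

/* O(n^2): quadratic time - nested loops over n elements */
int count_pairs_with_sum(const int arr[], int n, int target)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (arr[i] + arr[j] == target)
                count++;
    return count;
}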

 Importance:

Understanding time complexity helps in choosing the most efficient algorithm for a given task,
especially when dealing with large datasets.

Space Complexity:

 Definition:

Space complexity quantifies the amount of memory space an algorithm requires to run as a function
of the input size.

 Measurement:

Like time complexity, it's often expressed using Big O notation.

 Components:

Space complexity includes the space used by the input, output, and any auxiliary space (temporary
variables, data structures).

 Examples:

 O(1) (Constant): The algorithm uses a fixed amount of memory regardless of the
input size (e.g., simple calculations).

 O(n) (Linear): The memory usage grows linearly with the input size (e.g., storing an
array of n elements).

 O(log n) (Logarithmic): The memory usage grows logarithmically with the input size
(e.g., recursive algorithms with a limited recursion depth).
 O(n²) (Quadratic): The memory usage grows quadratically with the input size (e.g., a
matrix of n x n elements).
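
A small C sketch contrasting O(1) and O(n) auxiliary space (the function names are illustrative):

#include <stdlib.h>

/* O(1) auxiliary space: only a fixed number of variables, regardless of n */
long sum_in_place(const int arr[], int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    return sum;
}

/* O(n) auxiliary space: allocates a copy whose size grows with the input */
int *reversed_copy(const int arr[], int n)
{
    int *copy = malloc(n * sizeof *copy);   /* n extra integers */
    if (copy == NULL)
        return NULL;
    for (int i = 0; i < n; i++)
        copy[i] = arr[n - 1 - i];
    return copy;                            /* caller frees */
}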

 Importance:

Understanding space complexity is crucial for managing memory usage, especially in resource-
constrained environments.

Why is Analysis of Algorithms important?


 To predict the behaviour of an algorithm without implementing it on a specific computer.
 It is much more convenient to have simple measures for the efficiency of an algorithm than to
implement the algorithm and test the efficiency every time a certain parameter in the underlying
computer system changes.
 It is impossible to predict the exact behaviour of an algorithm. There are too many influencing
factors.
 The analysis is thus only an approximation; it is not perfect.
 More importantly, by analyzing different algorithms, we can compare them to determine the best
one for our purpose.
Types of Algorithm Analysis:
1. Best case
2. Worst case
3. Average case
 Best case: Defines the input for which the algorithm takes the least (minimum) time. In the
best case we calculate the lower bound of an algorithm.
Example: In linear search, the best case occurs when the search key is present at the first
location of a large data set.
 Worst case: Defines the input for which the algorithm takes the longest (maximum) time. In
the worst case we calculate the upper bound of an algorithm.
Example: In linear search, the worst case occurs when the search key is not present at all
(see the sketch below).
 Average case: In the average case we take all possible (random) inputs, calculate the
computation time for each, and divide the total by the number of inputs.
Average case = total time over all random inputs / total number of inputs
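
A C sketch of the linear search referred to above (the function name is illustrative):

/* Returns the index of key in arr[0..n-1], or -1 if not found.
 * Best case:    key at arr[0]                       -> 1 comparison  (lower bound)
 * Worst case:   key absent                          -> n comparisons (upper bound)
 * Average case: key equally likely at any position  -> about n/2 comparisons
 */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}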

1. Asymptotic Notation

Asymptotic notation is used to describe the running time of an algorithm as a function of its input
size and to compare the efficiency of different algorithms. Using asymptotic notation, we can
express the best-case, average-case and worst-case behaviour of an algorithm.

The lower the execution time, the better the performance of an algorithm.

Time complexity:

Computing the real running time of a process is not feasible, because the time taken for the
execution of any process depends on the size of the input.

 If the input size is 'n', then f(n) is a function of 'n' that denotes the time complexity.

There are three commonly used asymptotic notations:


1. Big O Notation (O) – upper bound
2. Omega Notation (Ω) – lower bound
3. Theta Notation (θ) – tight bound

1. Big O Notation (O)

Big-Oh notation is used to define the upper bound of an algorithm in terms of time
complexity.
That means Big-Oh notation always indicates the maximum time required by an algorithm
for all input values; in other words, it describes the worst case of an algorithm's time
complexity.
Big-Oh notation can be defined as follows...

Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant
term.
If f(n) <= C * g(n) for all n >= n0, for some constants C > 0 and n0 >= 1,
then we can write f(n) = O(g(n)).

If f(n) and C * g(n) are plotted with the input size n on the X-axis and the time required on the
Y-axis, then beyond the point n0, C * g(n) always lies above f(n), which indicates the
algorithm's upper bound.
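
For example, take f(n) = 3n + 2 and g(n) = n. Since 3n + 2 <= 4n for all n >= 2, choosing C = 4 and
n0 = 2 satisfies the definition, so f(n) = O(n).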

2. Omega Notation (Ω)


Big-Omega notation is used to define the lower bound of an algorithm in terms of time
complexity.
That means Big-Omega notation always indicates the minimum time required by an algorithm for
all input values; in other words, it describes the best case of an algorithm's time complexity.
Big-Omega notation can be defined as follows...

Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant term.
If f(n) >= C * g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can write
f(n) = Ω(g(n)).

If f(n) and C * g(n) are plotted with the input n on the X-axis and the time required on the Y-axis,
then beyond the point n0, C * g(n) always lies below f(n), which indicates the algorithm's lower
bound.
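
For example, for f(n) = 3n + 2 we have 3n + 2 >= 3n for all n >= 1, so choosing C = 3 and n0 = 1
gives f(n) = Ω(n).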

3. Theta Notation (θ)

Big-Theta notation is used to define a tight bound on an algorithm's time complexity.
That means Big-Theta notation bounds the running time of an algorithm from both above and
below, so it characterizes the exact order of growth rather than only the best or worst case.
Big-Theta notation can be defined as follows...

Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant
term.
If C1 * g(n) <= f(n) <= C2 * g(n) for all n >= n0, for some constants C1 > 0, C2 > 0 and n0 >= 1,
then we can write f(n) = Θ(g(n)).

If f(n), C1 * g(n) and C2 * g(n) are plotted with the input n on the X-axis and the time required
on the Y-axis, then beyond the point n0, f(n) always lies between C1 * g(n) and C2 * g(n), which
indicates the algorithm's tight bound.
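
For example, for f(n) = 3n + 2 we have 3n <= 3n + 2 <= 4n for all n >= 2, so with C1 = 3, C2 = 4 and
n0 = 2 we get f(n) = Θ(n).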

Divide and Conquer Algorithm


A divide and conquer algorithm is a strategy of solving a large problem by

1. breaking the problem into smaller sub-problems

2. solving the sub-problems, and

3. combining their solutions to get the desired output.

How Do Divide and Conquer Algorithms Work?

Here are the steps involved:

1. Divide: Divide the given problem into sub-problems using recursion.

2. Conquer: Solve the smaller sub-problems recursively. If the subproblem is small


enough, then solve it directly.

3. Combine: Combine the solutions of the sub-problems obtained during the recursive
process to solve the original problem (a small sketch follows below).
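
A minimal C sketch of the three steps (the recursive array sum and the name dc_sum are
illustrative choices, not from the notes):

/* Sum of arr[lo..hi] using divide and conquer. */
long dc_sum(const int arr[], int lo, int hi)
{
    if (lo == hi)                        /* Conquer: sub-problem small enough, solve directly */
        return arr[lo];

    int mid = lo + (hi - lo) / 2;        /* Divide: split the range in half */
    long left  = dc_sum(arr, lo, mid);
    long right = dc_sum(arr, mid + 1, hi);

    return left + right;                 /* Combine: merge the two partial results */
}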
Examples of Divide and Conquer Algorithm:
1. Merge Sort
2. Quick Sort
3. Binary Search
4. Strassen's algorithm

Merge Sort:

Merge Sort is one of the most popular sorting algorithms that is based on the principle
of Divide and Conquer Algorithm.

Here, the problem is divided into multiple sub-problems. Each sub-problem is solved
individually. Finally, the solutions of the sub-problems are combined to form the final solution.

Example:
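
As an illustration, here is a minimal merge sort sketch in C (a generic textbook formulation;
function and variable names are illustrative):

#include <stdio.h>
#include <string.h>

/* Merge the two sorted halves a[lo..mid] and a[mid+1..hi] using a temporary buffer. */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], &tmp[lo], (size_t)(hi - lo + 1) * sizeof(int));
}

/* Divide: split a[lo..hi]; Conquer: sort each half recursively; Combine: merge them. */
static void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi)                       /* 0 or 1 element: already sorted */
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7, 3};
    int tmp[6];
    merge_sort(a, tmp, 0, 5);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);            /* prints: 1 2 3 5 7 9 */
    printf("\n");
    return 0;
}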
