Module 1 UQ
1. Differentiate between the top-down and bottom-up approaches to problem solving.
Time Complexity
Time complexity is a type of computational complexity that describes the time required to
execute an algorithm.
The input size has a strong relationship with time complexity: as the size of the input
increases, so does the running time, i.e., the amount of time it takes the algorithm to run.
Here is an example.
Assume you have a set of numbers S = {10, 50, 20, 15, 30}.
There are numerous algorithms for sorting the given numbers. However, not all of them are
equally efficient. To determine which is the most efficient, you must perform a computational
analysis of each algorithm.
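As one concrete way to do such an analysis (a rough sketch, not part of the original notes; the choice of selection sort and the counter variable are only illustrative), we could count how many comparisons one candidate sorting algorithm performs on S:

#include <stdio.h>

int main(void)
{
    int S[] = {10, 50, 20, 15, 30};
    int n = 5;
    int comparisons = 0;

    /* Selection sort: always performs n(n-1)/2 comparisons, here 10. */
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++;
            if (S[j] < S[min])
                min = j;
        }
        int tmp = S[i];
        S[i] = S[min];
        S[min] = tmp;
    }

    printf("Comparisons performed: %d\n", comparisons);
    return 0;
}

Doing the same count for other candidate algorithms shows how each one's running time grows with the input size n.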
Space Complexity
The amount of memory a program needs in order to execute is represented by its space complexity.
Space complexity refers to the total amount of memory space used by an algorithm/program,
including the space taken by the input values. To determine the space complexity, calculate the
space occupied by the variables in the algorithm/program.
However, space complexity is frequently confused with auxiliary space. Auxiliary space is
simply the extra or temporary space used, and it is not the same as space complexity. To put it
another way, space complexity = auxiliary space + space used by the input.
The best algorithm/program should have a low space complexity: the less space required, the
better.
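As a small illustration (a sketch; the function and variable names are only illustrative), the function below uses O(n) space for its input but only O(1) auxiliary space:

#include <stdio.h>

/* Sums a[0..n-1]. The input occupies O(n) space; the extra (auxiliary)
   space is just the scalar variables total and i, i.e. O(1). */
int sum(const int a[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void)
{
    int a[] = {10, 50, 20, 15, 30};
    printf("%d\n", sum(a, 5));
    return 0;
}

So its total space complexity is O(n), while its auxiliary space is O(1).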
From the definition: O(g(n)) = { f(n) : there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0 }
It is the most widely used notation for asymptotic analysis. It specifies the upper bound of
a function, i.e., the maximum time required by an algorithm, which is the worst-case time
complexity. In other words, it gives the largest amount of time the algorithm can take for an
input of a given size.
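For example, for f(n) = 3n + 2, choosing c = 4 and n0 = 2 gives 3n + 2 <= 4n for all n >= 2, so f(n) = O(n).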
4. What is frequency count? Explain with an example.
The frequency count method is one of the methods to analyze the Time complexity of
an algorithm. In this method, we count the number of times each instruction is executed.
Based on that we will calculate the Time Complexity.
The frequency count method is also called the step count method.
Example
Binary search has a time complexity of O(log₂ n) because it divides the input size by 2 at each
step. This halving process is described by a logarithmic function: in a list of size n, binary
search can find an element in at most log₂(n) steps.
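To see the step count method itself on a simpler piece of code (a sketch; the function and variable names are illustrative), annotate each statement with the number of times it executes:

/* Step count for summing an array (names are illustrative only). */
int sum_array(const int a[], int n)
{
    int sum = 0;              /* executes 1 time      */
    for (int i = 0;           /* i = 0 : 1 time       */
         i < n;               /* i < n : n + 1 times  */
         i++)                 /* i++   : n times      */
        sum = sum + a[i];     /* body  : n times      */
    return sum;               /* executes 1 time      */
}
/* Total frequency count = 1 + 1 + (n + 1) + n + n + 1 = 3n + 4, i.e. O(n). */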
Verification is the process of checking that the software achieves its goal without any bugs. It
ensures that the product being developed is right, i.e., that it fulfils the requirements we
have. Verification is static testing.
9. What are the properties that an algorithm should have?
An algorithm is a procedure used for solving a problem or performing a computation.
The properties are:
1) Input
Zero or more quantities are externally supplied.
2) Output
At least one quantity is produced.
3) Definiteness
Each instruction of the algorithm should be clear and unambiguous.
4) Finiteness
The algorithm should terminate after a finite number of steps.
5) Effectiveness
Every instruction must be basic enough to be carried out, in principle, by a person using only
pencil and paper.
11. Derive the Big O notation for f(n) = n² + 2n + 5.
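Using the definition of O(g(n)) given above, one way to derive it:
For all n >= 1, n² + 2n + 5 <= n² + 2n² + 5n² = 8n².
So, with c = 8 and n0 = 1, f(n) <= c*n² for all n >= n0, and hence f(n) = O(n²).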
12. Write an algorithm/pseudo code for linear search and mention the best-case
and worst-case time complexity of the linear search algorithm?
Linear Search (Array A, Value x)
Step 1: Set i to 1
Step 2: if i > n, then jump to step 7
Step 3: if A[i] = x then jump to step 6
Step 4: Set i to i + 1
Step 5: Go to step 2
Step 6: Print element x found at index i and jump to step 8
Step 7: Print element not found
Step 8: Exit
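A minimal C version of the same procedure (a sketch; it uses 0-based indexing, so the reported index is one less than in the pseudocode above):

#include <stdio.h>

/* Returns the index of x in A[0..n-1], or -1 if x is not present. */
int linear_search(const int A[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (A[i] == x)
            return i;   /* element found at index i */
    return -1;          /* element not found */
}

int main(void)
{
    int A[] = {10, 50, 20, 15, 30};
    int pos = linear_search(A, 5, 20);
    if (pos != -1)
        printf("Element found at index %d\n", pos);
    else
        printf("Element not found\n");
    return 0;
}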
The best-case performance for the Linear Search algorithm is when the search item
appears at the beginning of the list and is O(1). The worst-case performance is when
the search item appears at the end of the list or not at all. This would require N
comparisons; hence, the worst case is O(N).
13. What is the purpose of calculating frequency count? Compute the frequency
count of the following code fragment.
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        printf("%d", a[i][j]);
The purpose of calculating the frequency count is to analyze the time complexity of an
algorithm. In this method, we count the number of times each instruction is executed and,
based on that, we calculate the time complexity.
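Applying the method to the fragment above:
i = 0 executes 1 time, i < n executes n + 1 times, and i++ executes n times.
The inner loop runs once for each of the n outer iterations: j = 0 executes n times, j < n executes n(n + 1) times, and j++ executes n² times.
printf executes n × n = n² times.
Total frequency count = 1 + (n + 1) + n + n + n(n + 1) + n² + n² = 3n² + 4n + 2, so the fragment is O(n²).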
SYSTEM LIFE CYCLE (SLC)
Good programmers regard large scale computer programs as systems that contain many
complex interacting parts. (Systems: Large Scale Computer Programs.)
As systems, these programs undergo a development process called the system life cycle.
(SLC: the development process of programs.)
Different Phases of System Life Cycle
1. Requirements
2. Analysis
3. Design
4. Refinement and coding
5. Verification
1. Requirement Phase:
All programming projects begin with a set of specifications that defines the purpose of
that program.
Requirements describe the information that the programmers are given (input) and the
results (output) that must be produced.
Frequently the initial specifications are defined vaguely and we must develop rigorous
input and output descriptions that include all cases.
2. Analysis Phase
In this phase the problem is broken down into manageable pieces.
There are two approaches to analysis: bottom-up and top-down.
The bottom-up approach is an older, unstructured strategy that places an early emphasis on
coding fine points. Since the programmer does not have a master plan for the project,
the resulting program frequently has many loosely connected, error-ridden segments.
The top-down approach is a structured approach that divides the program into manageable
segments.
This phase generates diagrams that are used to design the system.
Several alternate solutions to the programming problem are developed and compared
during this phase.
3. Design Phase
This phase continues the work done in the analysis phase.
The designer approaches the system from the perspectives of both data objects that the
program needs and the operations performed on them.
The first perspective leads to the creation of abstract data types while the second
requires the specification of algorithms and a consideration of algorithm design
strategies.
Ex: designing a scheduling system for a university.
Data objects: students, courses, professors, etc.
Operations: insert, remove, search, etc.
i.e., we might add a course to the list of university courses, search for the courses taught
by some professor, etc.
Since abstract data types and algorithm specifications are language independent, we must
specify the information required for each data object while ignoring coding details.
Ex: the Student object should include name, phone number, social security number, etc.
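Although this phase deliberately ignores coding details, a later refinement might represent the Student object roughly like this (a sketch; the field names and sizes are only indicative, not part of the original notes):

/* One possible C representation of the Student data object. */
typedef struct {
    char name[50];
    char phone_number[15];
    char social_security_number[12];
} Student;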
5. Verification Phase
This phase consists of:
developing correctness proofs for the program,
testing the program with a variety of input data, and
removing errors.
Correctness Proofs
Programs can be proven correct using proofs (like mathematical theorems).
Proofs are very time consuming and difficult to develop for large projects.
Scheduling constraints prevent the development of complete sets of proofs for a
larger system.
However, selecting algorithms that have been proven correct can reduce the
number of errors.
Testing
If done properly, correctness proofs and system tests will indicate erroneous
code.
Removal of errors depends on the design and code.
While debugging a large undocumented program written in ‘spaghetti’ code, each
corrected error possibly generates several new errors.
Debugging a well documented program that is divided into autonomous units
that interact through parameters is far easier. This is especially true if each unit is
tested separately and then integrated into the system.
15. How the performance of an algorithm is evaluated? Explain the best, worst
and average-case analysis of an algorithm with the help of an example.
The performance of an algorithm is evaluated by analyzing its time and space complexity. Time
complexity refers to how long an algorithm takes to run, while space complexity refers to how
much memory an algorithm uses.
One way to analyze an algorithm's performance is to use best-, worst-, and average-case analysis.
Best case is the function which performs the minimum number of steps on input data of n
elements. Worst case is the function which performs the maximum number of steps on input
data of size n. Average case is the function which performs an average number of steps on input
data of n elements.
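For example, for linear search on an array of n elements:
Best case: the key is the first element, so only 1 comparison is needed, which is O(1).
Worst case: the key is the last element or not present, so n comparisons are needed, which is O(n).
Average case: assuming the key is equally likely to be at any position, the expected number of comparisons is (1 + 2 + ... + n)/n = (n + 1)/2, which is still O(n).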
16. Write an algorithm to find the number of occurrence of each element in an
array and calculate the frequency count of the algorithm.
/* a[0..n-1] holds the input; Freq[0..n-1] will hold the computed frequencies. */
for (i = 0; i < n; i++)
    Freq[i] = -1;                /* -1 marks "frequency not yet counted" */

for (i = 0; i < n; i++)
{
    Count = 1;
    for (j = i + 1; j < n; j++)
    {
        if (a[i] == a[j])        /* check for duplicate elements */
        {
            Count++;
            Freq[j] = 0;         /* make sure not to count the same element again */
        }
    }
    if (Freq[i] != 0)            /* frequency of the current element is not counted yet */
    {
        Freq[i] = Count;
    }
}
Two nested loops are used here: for each i, the comparison a[i] == a[j] executes for
j = i + 1, ..., n - 1, i.e. (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 times in total, so the time
complexity of the algorithm is O(n²).
17. Describe the different notations used to describe the asymptotic running
time of an algorithm.
Asymptotic notations are mathematical tools used to analyze the performance of
algorithms by describing how an algorithm's running time changes as the input size
increases. They are also used to compare the performance of multiple algorithms.
There are mainly three asymptotic notations: Big O, Big Omega and Big Theta.
Big O
Big O notation gives an upper bound on the algorithm’s running time, i.e., the worst-case time
complexity.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be
O(g), if there is a constant c > 0 and a natural number n0 such that f(n) ≤ c*g(n) for all n ≥ n0.
Big Omega
Big Omega notation gives a lower bound on the algorithm’s running time, i.e., the best-case
time complexity.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be
Ω(g), if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.
Big Theta
Theta notation encloses the function from above and below. Since it represents the upper and
the lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be
Θ(g), if there are constants c1, c2 > 0 and a natural number n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n)
for all n ≥ n0.
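For example, for f(n) = 2n + 3: with c = 2 and n0 = 1 we have 2n ≤ 2n + 3 for all n ≥ 1, so f(n) = Ω(n); with c1 = 2, c2 = 5 and n0 = 1 we have 2n ≤ 2n + 3 ≤ 5n for all n ≥ 1, so f(n) = Θ(n).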
18. Calculate the frequency count of the statement x = x+1; in the following code
segment
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        x = x + 1;
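The statement x = x + 1 lies inside both loops: the inner loop body executes n times for each of the n iterations of the outer loop, so the statement executes n × n = n² times. Its frequency count is therefore n², and the fragment is O(n²).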