
Module 1

1. Differentiate between the top-down and bottom-up approaches to problem solving.

Write any 3 differences for a 3-mark question.

2. What do you mean by space complexity and time complexity of an algorithm?

Time Complexity
Time complexity is a type of computational complexity that describes the time required to
execute an algorithm.
Time complexity is closely tied to the input size: as the size of the input increases, so does the
runtime, i.e., the amount of time the algorithm takes to run.

Here is an example.

Assume you have a set of numbers S = {10, 50, 20, 15, 30}.

There are numerous algorithms for sorting the given numbers, but not all of them are equally
efficient. To determine which is the most efficient, you must perform a computational analysis
of each algorithm.
Space Complexity
Space complexity represents the amount of memory a program needs in order to execute. It
refers to the total amount of memory space used by an algorithm/program, including the space
taken by the input values. To determine space complexity, calculate the space occupied by the
variables in the algorithm/program.

However, people frequently confuse space complexity with auxiliary space. Auxiliary space is
simply the extra or temporary space an algorithm uses, and it is not the same as space
complexity. To put it another way,

Auxiliary space + space used by input values = Space complexity

A good algorithm/program should have low space complexity: the less memory it requires, the
better.
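As a small illustration (the function below is only a sketch; its name and variables are not from the original text), summing the elements of an array takes O(n) time because the loop body runs once per element, while its auxiliary space is O(1) because it uses only a fixed number of extra variables:

int sum_array(int a[], int n)       /* illustrative example, not from the original notes */
{
    int sum = 0;                    /* one extra variable: O(1) auxiliary space */
    for (int i = 0; i < n; i++)     /* loop body executes n times: O(n) time    */
        sum = sum + a[i];
    return sum;
}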

3. Define Big-O notation. Derive the Big-O notation for 5n³ + 2n² + 3n.

From the definition: O(g(n)) = { f(n) : there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0 }
It is the most widely used notation for asymptotic analysis. It specifies an upper bound on a
function, i.e., the maximum time required by an algorithm, or the worst-case time complexity.
In other words, it gives the largest possible growth rate of the running time for a given input
size.
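Applying the definition to f(n) = 5n³ + 2n² + 3n: for all n >= 1,
5n³ + 2n² + 3n <= 5n³ + 2n³ + 3n³ = 10n³.
Taking c = 10 and n0 = 1 gives f(n) <= c*n³ for all n >= n0, so 5n³ + 2n² + 3n = O(n³).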
4. What is frequency count? Explain with an example.
The frequency count method is one way to analyze the time complexity of an algorithm. In this
method, we count the number of times each instruction is executed and, based on these counts,
calculate the time complexity.
The frequency count method is also called the step count method.
Example
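The code for this example did not survive in this copy; a typical fragment that yields these counts (an assumed illustration, not necessarily the original example) is a simple summation loop:

sum = 0;                    /* executes 1 time                  */
for (i = 0; i < n; i++)     /* loop header executes n + 1 times */
    sum = sum + a[i];       /* executes n times                 */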

Therefore, the total count f(n) can be expressed as 1 + (n + 1) + n, which simplifies to 2n + 2,
where n is the size of the input. Consequently, the time complexity is O(n).
5. What are asymptotic notations? Give examples
Asymptotic notations are mathematical tools used to analyze the performance of
algorithms by describing how an algorithm's running time changes as the input size
increases. They are also used to compare the performance of multiple algorithms.
There are mainly three asymptotic notations:

1. Big-O Notation (O-notation)

2. Omega Notation (Ω-notation)

3. Theta Notation (Θ-notation)

For example, the worst-case running time of linear search is O(n), that of binary search is
O(log n), and that of bubble sort is O(n²).


6. What is the difference between algorithm and pseudocode?

Write any 3 differences for 3 Marks


7. The time complexity of the binary search algorithm is O(log n); justify the statement.

Binary search has a time complexity of O(log₂ n) because it halves the size of the search range
in each step. After k steps the remaining range has about n/2^k elements, and the search stops
once this reaches 1, i.e., after at most log₂(n) steps in a list of size n.
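A minimal iterative sketch in C (the function name is illustrative; it assumes the array is sorted in ascending order):

int binary_search(int a[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* middle of the current range */
        if (a[mid] == x)
            return mid;                     /* found                       */
        else if (a[mid] < x)
            low = mid + 1;                  /* discard the left half       */
        else
            high = mid - 1;                 /* discard the right half      */
    }
    return -1;                              /* not found                   */
}

Each iteration of the while loop discards half of the remaining range, so the loop runs at most about log₂(n) times.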

8. What is the significance of Verification in System Life Cycle

Verification is the process of checking that the software achieves its goal without any bugs, i.e.,
of ensuring that the product is being built correctly and fulfills the requirements we have.
Verification is static testing.
9. What are the properties that an algorithm should have
An algorithm is a procedure used for solving a problem or performing a computation.
Properties are

1) Input
Zero or more quantities are externally supplied.
2) Output
At least one quantity is produced.
3) Definiteness
Each instruction of the algorithm should be clear and unambiguous.
4) Finiteness
The process should terminate after a finite number of steps.
5) Effectiveness
Every instruction must be basic enough to be carried out, in principle, using only paper and
pencil.

10. What do you understand by the complexity of an algorithm? Write the worst case
and best case complexity of linear search.
Complexity in algorithms refers to the amount of resources (such as time or
memory) required to solve a problem or perform a task. The most common measure
of complexity is time complexity, which refers to the amount of time an algorithm
takes to produce a result as a function of the size of the input.

The best-case performance of the linear search algorithm occurs when the search item appears
at the beginning of the list, and it is O(1). The worst-case performance occurs when the search
item appears at the end of the list or not at all; this requires N comparisons, hence the worst
case is O(N).
11. Derive the Big O notation for f(n) = n² + 2n + 5.
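One way to derive it: for all n >= 1, n² + 2n + 5 <= n² + 2n² + 5n² = 8n². Taking c = 8 and
n0 = 1 gives f(n) <= c*n² for all n >= n0, so f(n) = n² + 2n + 5 is O(n²).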

12. Write an algorithm/pseudo code for linear search and mention the best case
and worst case time complexity of the linear search algorithm.
Linear Search (Array A, Value x)

Step 1: Set i to 1
Step 2: if i > n, then jump to step 7
Step 3: if A[i] = x then jump to step 6
Step 4: Set i to i + 1
Step 5: Go to step 2
Step 6: Print element x found at index i and jump to step 8
Step 7: Print element not found
Step 8: Exit
The best-case performance of the linear search algorithm occurs when the search item appears
at the beginning of the list, and it is O(1). The worst-case performance occurs when the search
item appears at the end of the list or not at all; this requires N comparisons, hence the worst
case is O(N).
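The same algorithm written as a C function (an illustrative sketch; the function name is an assumption, not part of the original answer):

int linear_search(int a[], int n, int x)
{
    for (int i = 0; i < n; i++)     /* examine elements from left to right       */
    {
        if (a[i] == x)
            return i;               /* best case: x at index 0, so O(1)          */
    }
    return -1;                      /* worst case: x absent, n comparisons, O(n) */
}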
13. What is the purpose of calculating frequency count? Compute the frequency
count of the following code fragment.
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        printf("%d", a[i][j]);

The purpose of the frequency count method is to analyze the time complexity of an algorithm:
we count the number of times each instruction is executed and, based on these counts, calculate
the time complexity.
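Applying the method to the given fragment (counting each for header one more time than its
body, as is usual in the step count method): the outer for executes n + 1 times, the inner for
executes n(n + 1) times, and the printf executes n × n times. The total frequency count is
(n + 1) + n(n + 1) + n² = 2n² + 2n + 1, so the time complexity of the fragment is O(n²).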

14. Explain the System Life Cycle in detail.

SYSTEM LIFE CYCLE (SLC)
• Good programmers regard large scale computer programs as systems that contain many
complex interacting parts. (Systems: large scale computer programs.)
• As systems, these programs undergo a development process called the system life cycle.
(SLC: development process of programs)
Different Phases of System Life Cycle
1. Requirements
2. Analysis
3. Design
4. Refinement and coding
5. Verification

1. Requirements Phase:
• All programming projects begin with a set of specifications that defines the purpose of
that program.
• Requirements describe the information that the programmers are given (input) and the
results (output) that must be produced.
• Frequently the initial specifications are defined vaguely, and we must develop rigorous
input and output descriptions that include all cases.

2. Analysis Phase
• In this phase the problem is broken down into manageable pieces.
• There are two approaches to analysis: bottom up and top down.
• The bottom up approach is an older, unstructured strategy that places an early emphasis on
coding fine points. Since the programmer does not have a master plan for the project,
the resulting program frequently has many loosely connected, error ridden segments.
• The top down approach is a structured approach that divides the program into manageable
segments.
• This phase generates diagrams that are used to design the system.
• Several alternative solutions to the programming problem are developed and compared
during this phase.
3. Design Phase
• This phase continues the work done in the analysis phase.
• The designer approaches the system from the perspectives of both the data objects that the
program needs and the operations performed on them.
• The first perspective leads to the creation of abstract data types, while the second
requires the specification of algorithms and a consideration of algorithm design
strategies.
Example: designing a scheduling system for a university.
Data objects: students, courses, professors, etc.
Operations: insert, remove, search, etc.
That is, we might add a course to the list of university courses, search for the courses taught
by some professor, etc.
• Since abstract data types and algorithm specifications are language independent, we must
specify the information required for each data object and ignore coding details.
Example: a student object should include name, phone number, social security number, etc.

4. Refinement and Coding Phase


• In this phase we choose representations for the data objects and write algorithms for each
operation on them.
• The representation chosen for a data object can determine the efficiency of the algorithms
related to it, so we should first write algorithms that are independent of the data objects.
• Frequently we realize at this stage that we could have created a much better system (perhaps
one of our alternative designs is superior). If our original design is good, it can absorb changes
easily.

5. Verification Phase
• This phase consists of
   – developing correctness proofs for the program,
   – testing the program with a variety of input data, and
   – removing errors.
Correctness Proofs
• Programs can be proven correct using proofs (like a mathematical theorem).
• Proofs are very time consuming and difficult to develop for large projects.
• Scheduling constraints prevent the development of complete sets of proofs for a
larger system.
• However, selecting algorithms that have been proven correct can reduce the
number of errors.
Testing

• Testing can be done only after coding.
• Testing requires working code and a set of test data.
• Test data should be chosen carefully so that it covers all possible scenarios.
• Good test data should verify that every piece of code runs correctly.
• For example, if our program contains a switch statement, the test data should be
chosen so that we can check each case within the switch statement.
Error Removal

• If done properly, correctness proofs and system tests will indicate erroneous
code.
• Removal of errors depends on the design and code.
• While debugging a large undocumented program written in 'spaghetti' code, each
corrected error possibly generates several new errors.
• Debugging a well documented program that is divided into autonomous units
that interact through parameters is far easier. This is especially true if each unit is
tested separately and then integrated into the system.

15. How is the performance of an algorithm evaluated? Explain the best, worst
and average case analysis of an algorithm with the help of an example.

The performance of an algorithm is evaluated by analyzing its time and space complexity. Time
complexity refers to how long an algorithm takes to run, while space complexity refers to how
much memory an algorithm uses.
One way to analyze an algorithm's performance is to use best, worst, and average case analysis.
The best case is the minimum number of steps performed on input data of n elements, the worst
case is the maximum number of steps performed on input data of size n, and the average case is
the average number of steps performed on input data of n elements. For example, in linear
search on an array of n elements, the best case is 1 comparison (the key is the first element), the
worst case is n comparisons (the key is last or absent), and the average case is about (n + 1)/2
comparisons when the key is present.
16. Write an algorithm to find the number of occurrences of each element in an
array and calculate the frequency count of the algorithm.
/* Freq[i] holds the frequency of a[i]; initialise it so that "not yet counted" can be detected */
for (i = 0; i < n; i++)
    Freq[i] = -1;
for (i = 0; i < n; i++)
{
    Count = 1;
    for (j = i + 1; j < n; j++)
    {
        if (a[i] == a[j])       /* check for duplicate elements */
        {
            Count++;
            Freq[j] = 0;        /* mark the duplicate so it is not counted again */
        }
    }
    if (Freq[i] != 0)           /* frequency of this element not counted yet */
    {
        Freq[i] = Count;
    }
}
Two nested loops are used: the comparison a[i] == a[j] executes (n − 1) + (n − 2) + ... + 1 =
n(n − 1)/2 times, so the frequency count grows proportionally to n² and the time complexity is
O(n²).

17. Describe the different notations used to describe the asymptotic running
time of an algorithm.
Asymptotic notations are mathematical tools used to analyze the performance of
algorithms by describing how an algorithm's running time changes as the input size
increases. They are also used to compare the performance of multiple algorithms.
There are mainly three asymptotic notations:

1. Big-O Notation (O-notation)

2. Omega Notation (Ω-notation)

3. Theta Notation (Θ-notation)
Big O Notation
From the definition: O(g(n)) = { f(n) : there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0 }
It is the most widely used notation for asymptotic analysis. It specifies an upper bound on a
function, i.e., the maximum time required by an algorithm, or the worst-case time complexity.
In other words, it gives the largest possible growth rate of the running time for a given input
size.

Big Omega
Omega notation gives a lower bound on the algorithm's running time; it describes the minimum
amount of time an algorithm needs to complete its execution.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be
Ω(g) if there exist a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all
n ≥ n0.

Big Theta
Theta notation bounds a function from above and below. Since it represents both an upper and
a lower bound on the running time of an algorithm, it is often used when analyzing the
average-case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be
Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n)
for all n ≥ n0.
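For instance, f(n) = 3n + 2 is Θ(n): for all n ≥ 1 we have 3n ≤ 3n + 2 ≤ 5n, so the definition is
satisfied with c1 = 3, c2 = 5 and n0 = 1.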

18. Calculate the frequency count of the statement x = x+1; in the following code
segment
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        x = x + 1;
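The statement x = x + 1 lies inside both loops: for each of the n values of i, the inner loop
executes it n times, so its frequency count is n × n = n². The time complexity of the fragment is
therefore O(n²).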
