
Name: Enayatullah Atal

ID: K1S20MCS0003 Date: 4/June/2020


Class: 1st Semester of MCS-BU Subj: Advanced Algorithm D&A

Mid Term Assignment


Solve the given questions and submit them before the exam.
1. Write an algorithm or pseudocode to read an array of N integer values, calculate its sum, and then print the sum.
2. What are constant time, linear time, quadratic time, and exponential time algorithms?
3. Describe the following operations on an array so that the time they take does not depend on the array's size n.
a. Delete the ith element of an array (1 ≤ i ≤ n).
b. Delete the ith element of a sorted array (the remaining array has to stay sorted, of course).
4. Write a program for multiplication of two numbers using a recursive function.
5. Explain in detail the average-case, best-case, and worst-case time complexity of an algorithm.
6. Write pseudocode that finds the sum of two 3x3 matrices and then calculate its running time.
7. Explain any two linear sorting techniques.
8. What is the benefit of the divide and conquer technique?
9. Give your own idea about quicksort.

1. Write an algorithm or Pseudocode to read an array of N integer values, calculate its sum and
then print the sum.

Algorithm:
Step 1: Start.
Step 2: Declare four integer variables: n, sum = 0, c, and element.
Step 3: Read the number of elements n for the array.
Step 4: Use a for loop (c = 1 to n) to read each element and add it to sum.
Step 5: Print sum and end.

C implementation:

#include <stdio.h>

int main(void)
{
    int n, sum = 0, c, element;

    printf("How many elements do you want to add?\n");
    scanf("%d", &n);

    printf("Enter %d elements\n", n);
    for (c = 1; c <= n; c++) {
        scanf("%d", &element);
        sum = sum + element;   /* accumulate the running total */
    }

    printf("Sum of all elements = %d\n", sum);
    return 0;
}
2. What are constant time, linear time, quadratic time, and exponential time algorithms?
A. Constant time Algorithm:

It requires the same amount of time regardless of the size of the input.
Or:
A constant time algorithm does not change its running time in response to the input data. No matter the size of the data it receives, the algorithm takes the same amount of time to run.
We denote this as a time complexity of O(1).

Ex: a good example of O(1) time is accessing a value with an array index.
Other examples include push() and pop() operations on the end of an array.
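
A minimal C sketch of a constant time operation (added as an illustration, not part of the original answer):

/* O(1): accessing an element by index touches one memory location,
   so it takes the same time no matter how long the array is. */
int get_element(const int a[], int i)
{
    return a[i];
}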

B. Linear time algorithm:


Its execution time depends directly on the input size.
This means that the more data you have, the more time it will take to process.
Time complexity is O(n).
Ex: finding the minimum value in an array.
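
A short C sketch of the minimum-finding example (added as an illustration):

/* O(n): every element is examined exactly once, so the running
   time grows linearly with the array length n. */
int find_minimum(const int a[], int n)
{
    int min = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] < min)
            min = a[i];
    return min;
}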

C. Quadratic time algorithm:

The number of operations it performs scales in proportion to the square of the input size.
Time complexity is O(n²).
Ex: selection sort.
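
A C sketch of selection sort, the quadratic example named above (added as an illustration):

/* O(n^2): for each of the n positions, the inner loop scans the
   remaining elements, giving roughly n*n/2 comparisons in total. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min_idx])
                min_idx = j;
        int tmp = a[i];          /* swap the smallest remaining element into place */
        a[i] = a[min_idx];
        a[min_idx] = tmp;
    }
}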
D. Exponential time algorithm:

An exponential time function grows very quickly, so such algorithms are only usable for small problem sizes.
Time complexity is O(2^n), which is very bad.
Ex: computing the Fibonacci sequence with naive recursion.
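
A C sketch of the naive recursive Fibonacci computation mentioned above (added as an illustration):

/* O(2^n): each call spawns two further calls, so the number of
   calls roughly doubles with every increase of n. */
long long fib(int n)
{
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}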

3. Describe the following operations on an array so that the time it takes does not depend on the
array’s size n.
a. Delete the ith element of an array (1≤i ≤n).
b. Delete the ith element of a sorted array (the remaining array has to stay sorted, of course).
A. Replace the ith element with the last element of the array and decrease the array size by 1. Only one element is moved, so the time does not depend on n.

B. Replace the ith element with a special symbol that cannot be a value of any array element (e.g., 0 for an array of positive numbers) to mark the ith position as empty. Since no elements are shifted, the remaining elements keep their order and the array stays sorted. (This method is sometimes called "lazy deletion".)
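
A brief C sketch of operation (a), swapping in the last element (added as an illustration; the function name is my own):

/* Delete a[i] in O(1) by overwriting it with the last element.
   The element order is not preserved, which is why this only
   answers part (a). *n is the current logical size of the array. */
void delete_unsorted(int a[], int *n, int i)
{
    a[i] = a[*n - 1];   /* move the last element into the gap */
    (*n)--;             /* shrink the logical size by one     */
}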

4. Write a program for multiplication of two numbers using a recursive function.

#include <stdio.h>

int multiply(int, int);

int main() {

    int a, b, product;
    printf("Enter any two integers: ");
    scanf("%d%d", &a, &b);

    product = multiply(a, b);

    printf("Multiplication of two integers is %d", product);

    return 0;
}

/* Computes a * b by adding b to a static accumulator a times.
   The static variables keep their values across the recursive calls,
   so the function should be called only once per run and expects a
   non-negative first argument. */
int multiply(int a, int b) {

    static int product = 0, i = 0;

    if (i < a) {
        product = product + b;   /* add b once more */
        i++;
        multiply(a, b);          /* recurse; the result accumulates in 'product' */
    }

    return product;
}
5. Explain in detail the average-case, best-case, and worst-case time complexity of an algorithm.

A. Best case:
The fastest time to complete, with the most favorable input chosen.
For example, the best case for a sorting algorithm would be data that is already sorted.

B. Worst case:
The slowest time to complete, with the least favorable input chosen.
For example, the worst case for a sorting algorithm might be data that is sorted in reverse order (but it depends on the particular algorithm).

C. Average case:
The arithmetic mean over inputs. Run the algorithm many times, using many different inputs of size n that come from some distribution that generates those inputs (in the simplest case, all possible inputs are equally likely), compute the total running time by adding the individual times, and divide by the number of trials. You may also need to normalize the results based on the size of the input sets.
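
As a small illustration (added here, not part of the original answer), consider linear search in C; the comments mark where the best and worst cases come from:

/* Returns the index of key in a[0..n-1], or -1 if it is absent. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* best case: key is at a[0], one comparison, O(1) */
    return -1;          /* worst case: key is absent, n comparisons, O(n)  */
}
/* Average case: if the key is equally likely to be at any position,
   about n/2 comparisons are needed, which is still O(n). */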

6. Write pseudocode that finds the sum of two 3x3 matrices and then calculate its running time.

Step 1: Start the program.
Step 2: Declare four integer variables and define three 3x3 arrays:
        integer m, n, c, d, first[3][3], second[3][3], sum[3][3]
Step 3: Read the number of rows and columns of the matrices (here m = n = 3).
        Read(m, n)
Step 4: Read the elements of the first matrix, using a nested for loop.
        For (c = 0; c < m; c++)
            For (d = 0; d < n; d++)
                Read(first[c][d])
Step 5: Read the elements of the second matrix, using a nested for loop.
        For (c = 0; c < m; c++)
            For (d = 0; d < n; d++)
                Read(second[c][d])
Step 6: Add the two matrices element by element and print the result, using a nested for loop.
        For (c = 0; c < m; c++)
            For (d = 0; d < n; d++)
                sum[c][d] = first[c][d] + second[c][d]
                Print(sum[c][d])
Step 7: End.
b. Time complexity:
The program uses three doubly nested loops, and each iteration does a constant amount of work c, so:
First nested loop  = n * n * c = c·n²
Second nested loop = n * n * c = c·n²
Third nested loop  = n * n * c = c·n²

Adding all nested loops gives c·n² + c·n² + c·n² = 3c·n², i.e. O(n²) in general; for the fixed 3x3 matrices of this question (n = 3) the running time is a constant number of operations.

The constant c on my computer is taken to be 0.001 seconds per operation.
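
A minimal C version of the same pseudocode, fixed to 3x3 matrices (added as an illustration alongside the original answer):

#include <stdio.h>

#define N 3   /* the question fixes the matrices at 3x3 */

int main(void)
{
    int first[N][N], second[N][N], sum[N][N];

    printf("Enter 9 elements of the first matrix:\n");
    for (int c = 0; c < N; c++)
        for (int d = 0; d < N; d++)
            scanf("%d", &first[c][d]);

    printf("Enter 9 elements of the second matrix:\n");
    for (int c = 0; c < N; c++)
        for (int d = 0; d < N; d++)
            scanf("%d", &second[c][d]);

    /* Element-wise addition: each loop body is O(1), so the whole
       addition is N*N = 9 operations for fixed N = 3. */
    for (int c = 0; c < N; c++) {
        for (int d = 0; d < N; d++) {
            sum[c][d] = first[c][d] + second[c][d];
            printf("%d ", sum[c][d]);
        }
        printf("\n");
    }
    return 0;
}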

7. Explain any two linear sorting techniques.


A. Counting sort:
Counting sort is a sorting algorithm that sorts the elements of an array by counting the number of occurrences of each unique value in the array. The counts are stored in an auxiliary array, and the sorting is done by using those counts as indices into the auxiliary array.
For example, suppose we want to sort the following array using counting sort:
Given array = 4 2 2 8 3 3 1

Step 1: Find the maximum element. Max = 8.

Step 2: Initialize a count array of length Max + 1 with all elements 0. This array is used to store the count of each value in the array.

Index: 0 1 2 3 4 5 6 7 8
Count: 0 0 0 0 0 0 0 0 0

Step 3: Store the count of each element at its respective index in the count array.

Index: 0 1 2 3 4 5 6 7 8
Count: 0 1 2 2 1 0 0 0 1

Step 4: Store the cumulative sum of the elements of the count array. This helps in placing the elements at the correct index of the output array.

Index:            0 1 2 3 4 5 6 7 8
Cumulative count: 0 1 3 5 6 6 6 6 7

Step 5: For each element of the original array, look up its cumulative count, decrement it, and place the element at that index of the output array. For example, the cumulative count of element 4 is 6, so 4 is placed at output index 6 - 1 = 5.

Array:              4 2 2 8 3 3 1
Output (index 0-6): 1 2 2 3 3 4 8

Output = 1 2 2 3 3 4 8
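
A compact C sketch of counting sort matching the steps above (added as an illustration; it assumes non-negative integer keys no larger than max):

#include <stdio.h>

/* Sorts a[0..n-1] in place, assuming every value is in 0..max. */
void counting_sort(int a[], int n, int max)
{
    int count[max + 1];
    int output[n];

    for (int i = 0; i <= max; i++)      /* Step 2: all counts start at 0 */
        count[i] = 0;
    for (int i = 0; i < n; i++)         /* Step 3: count occurrences */
        count[a[i]]++;
    for (int i = 1; i <= max; i++)      /* Step 4: cumulative counts */
        count[i] += count[i - 1];
    for (int i = n - 1; i >= 0; i--)    /* Step 5: place elements (stable) */
        output[--count[a[i]]] = a[i];
    for (int i = 0; i < n; i++)
        a[i] = output[i];
}

int main(void)
{
    int a[] = {4, 2, 2, 8, 3, 3, 1};
    int n = sizeof a / sizeof a[0];

    counting_sort(a, n, 8);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);            /* prints: 1 2 2 3 3 4 8 */
    printf("\n");
    return 0;
}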

B. Radix Sort:
Radix sort is a sorting technique that sorts the elements by grouping the individual digits of the same place value, starting with the least significant digit and sorting on one digit position per pass.
Suppose we have an array of 7 elements. First we sort the elements based on the value of the units place, then based on the value of the tens place, and this process goes on until the most significant place has been processed.
For example, the array 121, 432, 564, 23, 1, 45, 788 is sorted by radix sort through the following passes:

Original array:            121  432  564  023  001  045  788
After units-place pass:    121  001  432  023  564  045  788
After tens-place pass:     001  121  023  432  045  564  788
After hundreds-place pass: 001  023  045  121  432  564  788
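
A C sketch of radix sort that uses a stable counting pass per digit (added as an illustration; the helper name is my own):

#include <stdio.h>

/* Stable counting sort on the digit selected by 'exp' (1, 10, 100, ...). */
static void counting_pass(int arr[], int n, int exp)
{
    int output[n];
    int count[10] = {0};

    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (int i = 1; i < 10; i++)
        count[i] += count[i - 1];
    for (int i = n - 1; i >= 0; i--) {
        int d = (arr[i] / exp) % 10;
        output[--count[d]] = arr[i];
    }
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

void radix_sort(int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    /* one counting pass per digit, from least to most significant */
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_pass(arr, n, exp);
}

int main(void)
{
    int a[] = {121, 432, 564, 23, 1, 45, 788};
    int n = sizeof a / sizeof a[0];

    radix_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   /* prints: 1 23 45 121 432 564 788 */
    printf("\n");
    return 0;
}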

8. What is the benefit of the divide and conquer technique?

The divide and conquer method reduces the degree of difficulty, since it divides the problem into sub-problems that are easily solvable, and it usually runs faster than other algorithms would. It also uses memory caches effectively.
Divide: split the problem into a number of sub-problems.
Conquer: solve the sub-problems by calling the procedure recursively until they are small enough to be solved directly.
Combine: combine the sub-problem solutions to obtain the solution of the original problem.
The divide and conquer paradigm is an algorithm design paradigm built on this simple process: it divides the problem into smaller sub-parts until those sub-parts become simple enough to be solved, the sub-parts are then solved recursively, and their solutions are combined to give a solution to the original problem. In other words, the paradigm follows three basic steps: Step 1, break the problem down into smaller sub-problems; Step 2, solve these sub-problems separately; Step 3, combine the solutions of the sub-problems.
The first, and probably most recognizable, benefit of the divide and conquer paradigm is that it allows us to solve difficult and often impossible-looking problems, such as the Tower of Hanoi, which is a mathematical game or puzzle. Being given a difficult problem can be discouraging when there is no idea how to go about solving it. With the divide and conquer method, however, the degree of difficulty is reduced because the problem is divided into sub-problems that are easily solvable, and the result usually runs faster than other algorithms would. Another advantage of this paradigm is that it often plays a part in finding other efficient algorithms; in fact, it played the central role in the discovery of quicksort and merge sort. It also uses memory caches effectively: when the sub-problems become small enough, they can be solved within the cache, without accessing the slower main memory, which saves time and makes the algorithm more efficient. In some cases it can even produce more precise outcomes in computations with rounded arithmetic than iterative methods would. Packaged with all of these advantages, however, are some weaknesses in the process.

[Wikipedia]
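
Merge sort, mentioned above as one of the algorithms the paradigm produced, makes the three steps concrete; the following C sketch is an illustration added here:

#include <stdio.h>
#include <string.h>

/* Combine: merge the two sorted halves a[lo..mid] and a[mid+1..hi]. */
static void merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;

    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, k * sizeof(int));
}

void merge_sort(int a[], int lo, int hi)
{
    if (lo >= hi)                  /* a single element is already sorted */
        return;
    int mid = lo + (hi - lo) / 2;  /* Divide: split the range in half */
    merge_sort(a, lo, mid);        /* Conquer: sort each half recursively */
    merge_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);         /* Combine: merge the sorted halves */
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7, 3};
    int n = sizeof a / sizeof a[0];

    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);       /* prints: 1 2 3 5 7 9 */
    printf("\n");
    return 0;
}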
9. Give your own idea about quicksort.
Quicksort
Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the
array and partitioning the other elements into two sub-arrays, according to whether they are
less than or greater than the pivot. The sub-arrays are then sorted recursively.

The time complexity of quicksort is O(n log n) in the best case, O(n log n) in the average case, and O(n²) in the worst case. But because it has the best performance in the average case for most inputs, quicksort is generally considered the "fastest" sorting algorithm.
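
A short C sketch of quicksort using the last element as the pivot (added as an illustration; other pivot choices are possible):

#include <stdio.h>

static void swap(int *x, int *y)
{
    int t = *x;
    *x = *y;
    *y = t;
}

/* Partition a[lo..hi] around the pivot a[hi]; return the pivot's final index. */
static int partition(int a[], int lo, int hi)
{
    int pivot = a[hi];
    int i = lo;

    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)          /* elements smaller than the pivot go left */
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);           /* place the pivot between the two parts */
    return i;
}

void quick_sort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quick_sort(a, lo, p - 1);  /* sort the left sub-array recursively  */
        quick_sort(a, p + 1, hi);  /* sort the right sub-array recursively */
    }
}

int main(void)
{
    int a[] = {9, 4, 7, 1, 3, 8};
    int n = sizeof a / sizeof a[0];

    quick_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);       /* prints: 1 3 4 7 8 9 */
    printf("\n");
    return 0;
}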

Summary of Research Paper 3

Performance analysis of divide-and-conquer strategies for large-scale simulations in R


Abstract
This research paper introduces a family of divide-and-conquer strategies that can help domain experts perform large-scale simulations by scaling up their analysis code written in R. R is used as the analysis and computing language, allowing advanced users to provide custom R scripts and variables that are fully embedded into the large-scale analysis workflow. The overall process divides the large-scale simulation task into pieces and conquers those pieces with Slurm array jobs and R.

Slurm array jobs: A SLURM job array is a collection of jobs that differ from each other by
only a single index parameter. Creating a job array provides an easy way to group
related jobs together.
R Programming language:
R is a programming language and free software environment for statistical computing and
graphics supported by the R Foundation for Statistical Computing. The R language is widely
used among statisticians and data miners for developing statistical software and data
analysis.
Introduction
Simulation means evaluating various methodologies; it allows the user to study the efficiency of a model and the model's behavior under different conditions. This paper considers a general framework that is useful in performing large-scale computational simulations.
In this paper the simulation method is designed and implemented in R. R includes features found in most common languages, such as loops and random number generators; these features facilitate the generation and analysis of data. R is open source and can run across a variety of operating systems.

Basic structure

Several R scripts and their parameters illustrate the divide-and-conquer phases of a large-scale simulation.

All models have three main parameters:


• B – Batch size
• I – Number of intermediate files
• D – Dimension of intermediate files

The two scripts are:


• Simulation.R – executed as a Slurm task array job that takes two parameters (B, I), where B is the batch size and I is the number of intermediate output files.
• Aggregate.R – this script first checks whether all B simulations have completed execution by checking for the last intermediate file in each folder.
Four divide-and-conquer strategies are used to process and generate large datasets:
A. Serial framework (SsAs)
In this model, simulation and aggregation are executed sequentially. The simulation
job performs multiple tasks sequentially.

B. Parallel – serial framework (SpAs)


In this model, the simulation task is split into independent jobs that are executed by
Slurm array in parallel.

C. Parallel Framework
In this framework, simulation and aggregation jobs are both executed in parallel. The
simulation task is split into independent jobs that are executed by the Slurm array job
in parallel.

D. Improved Parallel Framework


In this improved parallel framework, simulation and aggregation are integrated into one script. Both tasks are executed in parallel by a single Slurm array job submission, running on the same set of allocated compute nodes.
