Enayatullah Atal: Mid Term Assignment
1. Write an algorithm or Pseudocode to read an array of N integer values, calculate its sum and
then print the sum.
Step 1: Start.
Step 2: Declare variables n, sum, c, and element as integers, with sum initialized to 0.
Step 3: Read n, the number of elements of the array.
Step 4: For c = 1 to n, read each element and add it to the sum: sum = sum + element.
Step 5: Print sum.
Step 6: End.
The algorithm can be implemented in C as follows:
#include <stdio.h>

int main()
{
    int n, sum = 0, c, element;

    printf("How many elements do you want to add?\n");
    scanf("%d", &n);
    printf("Enter %d elements\n", n);

    for (c = 1; c <= n; c++) {
        scanf("%d", &element);
        sum = sum + element;
    }

    printf("Sum of all the elements = %d\n", sum);
    return 0;
}
2. What is a constant time algorithm, linear time, quadratic and exponential time algorithm?
A. Constant time algorithm:
It requires the same amount of time regardless of the size of the input.
Or
A constant time algorithm doesn't change its running time in response to the input data. No
matter the size of the data it receives, the algorithm takes the same amount of time to run.
We denote this as a time complexity of O(1).
Ex: A good example of O(1) time is accessing a value with an array index.
Other examples include the push() and pop() operations on an array.
B. Linear time algorithm:
The number of operations it performs grows in direct proportion to the size of the input.
Time complexity is O(N).
Ex: linear search through an array.
C. Quadratic time algorithm:
The number of operations it performs scales in proportion to the square of the input.
Time complexity is O(N^2).
Ex: selection sort.
D. Exponential time algorithm:
An exponential time function grows very fast, so such an algorithm is usable only for small problems.
Time complexity is O(2^n), which is very bad.
Ex: the naive recursive computation of the Fibonacci sequence.
3. Describe the following operations on an array so that the time it takes does not depend on the
array’s size n.
a. Delete the ith element of an array (1≤i ≤n).
b. Delete the ith element of a sorted array (the remaining array has to stay sorted, of course).
A. Replace the ith element with the last element and decrease the array size by 1.
B. Replace the ith element with a special symbol that cannot be a value of the array’s element (e.g.,
0 for an array of positive numbers) to mark the ith position as empty. (This method is sometimes
called the “lazy deletion”.)
4. Write a C program that reads two integers and finds their product using a function.
#include <stdio.h>

int multiply(int, int);

int main() {
    int a, b, product;
    printf("Enter any two integers: ");
    scanf("%d%d", &a, &b);
    product = multiply(a, b);
    printf("Product = %d\n", product);
    return 0;
}

int multiply(int x, int y) {
    int product = x * y;
    return product;
}
5. Explain in detail about the average case, best case, and worst-case time complexity algorithm?
A. Best case:
The fastest time to complete, with optimal inputs chosen.
For example, the best case for a sorting algorithm would be data that is already sorted.
B. Worst case:
The slowest time to complete, with the worst possible inputs chosen.
For example, the worst case for a sorting algorithm might be data that is sorted in reverse
order (but it depends on the particular algorithm).
C. Average case:
The arithmetic mean of the running times. Run the algorithm many times, using many different inputs of size n that
come from some distribution that generates these inputs (in the simplest case, all the
possible inputs are equally likely), compute the total running time (by adding the individual
times), and divide by the number of trials. You may also need to normalize the results based
on the size of the input sets.
6. Write a pseudo code that finds the sum of two 3*3 matrices and then calculate its running time.
7. Explain the counting sort and radix sort algorithms with examples.
A. Counting Sort:
Counting sort sorts an array by counting the number of occurrences of each distinct element
and then using those counts to compute each element's position in the sorted output.
For example, sort the array 4 2 2 8 3 3 1 (Steps 1 and 2: find the maximum element, here 8,
and initialize a count array of size 9 with zeros).
Step 3: Store the count of each element at its respective index in the count array.
Index: 0 1 2 3 4 5 6 7 8
Count: 0 1 2 2 1 0 0 0 1
Step 5: Store the cumulative sum of the elements of the count array. It helps in placing the
elements at the correct index of the sorted array.
Index:      0 1 2 3 4 5 6 7 8
Cumulative: 0 1 3 5 6 6 6 6 7
Step 6: Find the index of each element of the original array in the count array; this gives its
cumulative count. Place the element at the index cumulative count minus one (for example,
element 4 has cumulative count 6, so it is placed at index 6 - 1 = 5), then decrease that count
by one.
Array: 4 2 2 8 3 3 1
Count: 0 1 3 5 6 6 6 6 7
Output: 1 2 2 3 3 4 8
B. Radix Sort:
Radix sort is a sorting technique that sorts the elements by first grouping the individual digits
of the same place value. Then sort the elements according to their increasing/decreasing
order.
Suppose we have an array of elements. First we sort the elements based on the value of
the units place, then based on the value of the tens place, and this process goes on up to
the most significant place.
For example, the array 121, 432, 564, 23, 1, 45, 788 is sorted by radix sort as shown
below (one pass per digit place):
Sort by units   Sort by tens   Sort by hundreds
121             001            001
001             121            023
432             023            045
023             432            121
564             045            432
045             564            564
788             788            788
8. Explain the divide and conquer method and its advantages.
The divide and conquer method solves a problem in three steps:
Divide: Divide the problem into smaller sub-problems.
Conquer: Solve the sub-problems by recursive calls until each sub-problem is solved.
Combine: Combine the solutions of the sub-problems to obtain the solution of the original problem.
The Divide and Conquer Paradigm is an algorithm design paradigm which uses this simple
process: It Divides the problem into smaller sub-parts until these sub-parts become simple
enough to be solved, and then the sub parts are solved recursively, and then the solutions to
these sub-parts can be combined to give a solution to the original problem. In other words,
this paradigm follows three basic steps; Step 1: Break down the problem into smaller sub-
problems, Step 2: Solve these separate sub-problems, and Step 3: Combine the solutions of
the sub-problems
The first, and probably most recognizable benefit of the divide and conquer paradigm is the
fact that it allows us to solve difficult and often impossible-looking problems, such as the Tower of
Hanoi, which is a mathematical game or puzzle. Being given a difficult problem can often be
discouraging if there is no idea how to go about solving it. However, with the divide and
conquer method, it reduces the degree of difficulty since it divides the problem into sub
problems that are easily solvable, and usually runs faster than other algorithms would.
Another advantage to this paradigm is that it often plays a part in finding other efficient
algorithms, and in fact it was the central role in finding the quick sort and merge sort
algorithms. It also uses memory caches effectively. The reason for this is the fact that when
the sub problems become simple enough, they can be solved within a cache, without having
to access the slower main memory, which saves time and makes the algorithm more efficient.
And in some cases, it can even produce more precise outcomes in computations with rounded
arithmetic than iterative methods would. Packaged with all of these advantages, however, are
some weaknesses in the process.
[Wikipedia]
9. Give your own idea about quick sort?
Quicksort
Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the
array and partitioning the other elements into two sub-arrays, according to whether they are
less than or greater than the pivot. The sub-arrays are then sorted recursively.
The time complexity of Quicksort is O(n log n) in the best case, O(n log n) in the average
case, and O(n^2) in the worst case. But because it has the best performance in the average
case for most inputs, Quicksort is generally considered the "fastest" sorting algorithm.
Slurm array jobs: A SLURM job array is a collection of jobs that differ from each other by
only a single index parameter. Creating a job array provides an easy way to group
related jobs together.
R Programming language:
R is a programming language and free software environment for statistical computing and
graphics supported by the R Foundation for Statistical Computing. The R language is widely
used among statisticians and data miners for developing statistical software and data
analysis.
Introduction
Simulation means evaluating various methodologies, which allows the user to study the
efficiency of a model and the model's behavior under different conditions. This paper considers a
general framework that is useful in performing large-scale computational simulation.
In this paper, a simulation method is designed and implemented in R. R includes features that
are found in most common languages, such as loops and random number generators. These
features facilitate the generation and analysis of data. R is open source and can be run across
a variety of operating systems.
Basic structure
Many R scripts and their parameters are used to illustrate the divide-and-conquer phases
in large-scale simulation.
C. Parallel Framework
In this framework, simulation and aggregation jobs are both executed in parallel: the
simulation task is split into independent jobs that are run concurrently by a Slurm
array job.