
DCA6105 - COMPUTER ARCHITECTURE

1. What do you understand by parallelism in computer architecture? Discuss the
different classes of parallelism and parallel architectures.

Parallel computer architecture takes computer system development to a new level by
utilising an increasing number of processors. In principle, the performance obtained
through the use of a large number of processors is greater than the performance of a
single processor at any given time.

Parallel computing refers to the process of breaking down larger problems into
smaller, independent, often similar parts that can be executed simultaneously by
multiple processors communicating via shared memory, the results of which are
combined upon completion as part of an overall algorithm. The primary goal of
parallel computing is to increase available computation power for faster application
processing and problem solving.

Parallel computing infrastructure is typically housed within a single data center where
several processors are installed in a server rack; the application server distributes
computation requests in small chunks, which are then executed simultaneously on
each server.

There are generally four types of parallel computing, available from both proprietary
and open-source parallel computing vendors: bit-level parallelism, instruction-level
parallelism, task parallelism, and data-level parallelism.

Bit-level parallelism: increases the processor word size, which reduces the number of
instructions the processor must execute in order to perform an operation on variables
larger than the word length.
Instruction-level parallelism: the hardware approach works on dynamic parallelism,
in which the processor decides at run time which instructions to execute in parallel;
the software approach works on static parallelism, in which the compiler decides
which instructions to execute in parallel.
Task parallelism: a form of parallelisation of computer code across multiple
processors that runs several different tasks at the same time on the same data.
Data-level parallelism (DLP): instructions from a single stream operate concurrently
on several data items; it is limited by irregular data-manipulation patterns and by
memory bandwidth.
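As an illustrative sketch (not from the text), data-level parallelism can be shown in Python by splitting one data set into independent chunks and letting separate worker processes apply the same operation (here, summing) to each chunk; the `chunk` helper and variable names are made up for this example.

```python
# Hedged sketch of data-level parallelism: one data set, one operation,
# many independent chunks processed by a pool of worker processes.
from multiprocessing import Pool


def chunk(data, n):
    """Split `data` into n nearly equal, independent parts."""
    k, r = divmod(len(data), n)
    return [data[i * k + min(i, r):(i + 1) * k + min(i + 1, r)]
            for i in range(n)]


if __name__ == "__main__":
    data = list(range(1, 101))
    parts = chunk(data, 4)
    with Pool(processes=4) as pool:
        partial = pool.map(sum, parts)   # each chunk is summed concurrently
    # combine the partial results as part of the overall algorithm
    print(sum(partial))                  # 5050, same as a serial sum
```

Note that the final combination step is exactly the "results combined upon completion" mentioned above; the parallel and serial answers must agree.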

Why parallel computing? 


1. The whole real world runs in a dynamic nature: many things happen at the same
time in different places, and this data is extremely large to manage.
2. Real-world data needs dynamic simulation and modelling, and parallel computing
is the key to achieving this.
3. Parallel computing provides concurrency and saves time and money.
4. Large, complex datasets and their management can be organised only by using
parallel computing's approach.
5. It ensures the effective utilisation of resources: the hardware is used effectively,
whereas in serial computation only part of the hardware is used and the rest is left
idle.
6. It is also impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
1. Databases and data mining.
2. Real-time simulation of systems.
3. Science and engineering.
4. Advanced graphics, augmented reality, and virtual reality.

Limitations of Parallel Computing:
1. Communication and synchronization between multiple sub-tasks and processes
are difficult to achieve.
2. Algorithms must be managed in such a way that they can be handled in parallel.
3. The algorithms or programs must have low coupling and high cohesion, but it is
difficult to create such programs.
4. Only technically skilled and expert programmers can code a parallelism-based
program well.

2. What do you understand by processes and threads? Differentiate between them.


A process is an instance of a program that is being executed. When we run a program,
it does not execute directly; it takes some time to follow all the steps required to
execute the program, and following these execution steps is known as a process.
A process can create other processes to perform multiple tasks at a time; the created
processes are known as clone or child processes, and the main process is known as
the parent process. Each process contains its own memory space and does not share it
with other processes. A process is known as an active entity. In memory, a typical
process consists of the text (code), data, heap, and stack sections.

A process in OS can remain in any of the following states:


- NEW: A new process is being created.
- READY: The process is ready and waiting to be allocated to a processor.
- RUNNING: The process is being executed.
- WAITING: The process is waiting for some event to occur.
- TERMINATED: The process has finished execution.
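As a hedged sketch (not part of the text), some of these lifecycle states can be observed with Python's standard multiprocessing module; the `worker` function name is illustrative.

```python
# Hedged sketch: creating a child process and watching its lifecycle with
# the standard multiprocessing module.
import multiprocessing
import time


def worker():
    # While this body runs, the child is RUNNING; returning ends in TERMINATED.
    time.sleep(0.2)


if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)  # NEW: process object created
    p.start()                                   # READY -> RUNNING (scheduler decides)
    print(p.is_alive())                         # typically True while the child runs
    p.join()                                    # parent WAITING for the child to finish
    print(p.is_alive(), p.exitcode)             # child TERMINATED; exit code 0 on success
```

Here the parent itself passes through a WAITING state during `join()`, which mirrors the state list above.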

Thread: A thread is a subset of a process and is also known as a lightweight
process. A process can have more than one thread, and these threads are managed
independently by the scheduler. All the threads within one process are interrelated.
Threads share some common information, such as the data segment, code segment,
and open files, with their peer threads, but each thread has its own registers, stack,
and program counter.

Types of Threads: There are two types of threads:


1. User-Level Thread: These are managed entirely in user space, and the kernel has
no information about them. They are faster and easy to create and manage. The
kernel treats all of these threads as a single process and handles them as one process
only. User-level threads are implemented by user-level libraries, not by system calls.
2. Kernel-Level Thread: These are handled by the operating system and managed by
its kernel. They are slower than user-level threads because their context information
is maintained by the kernel. To create and implement a kernel-level thread, we need
to make a system call.

Key Features:
1. Threads share data, memory, resources, files, etc., with their peer threads within a
process.
2. One system call is capable of creating more than one thread.
3. Each thread has its own stack and registers.
4. Threads can directly communicate with each other as they share the same address
space.
5. Threads need to be synchronized in order to avoid unexpected scenarios.
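A minimal Python sketch of features 1, 4, and 5 above: several threads update one shared variable, and a lock provides the synchronization needed to avoid unexpected results (the variable and function names are illustrative).

```python
# Hedged sketch: threads in one process share the same address space, so
# concurrent updates to shared data must be synchronized (here with a Lock).
import threading

counter = 0
lock = threading.Lock()


def add(n):
    global counter
    for _ in range(n):
        with lock:            # serialize the read-modify-write on shared state
            counter += 1


threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every thread sees and updates the same variable
```

Removing the lock would make the final value unpredictable on some Python builds, which is exactly the "unexpected scenarios" that feature 5 warns about.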

Difference between Process and Thread:

1. A process is a program in execution; a thread is a segment of a process.
2. A process takes more time to terminate; a thread takes less time.
3. A process takes more time to create; a thread takes less time.
4. A process takes more time for context switching; a thread takes less time.
5. A process is less efficient in terms of communication; a thread is more efficient.
6. A process consumes more resources; a thread consumes fewer resources.
7. Processes are isolated; threads share memory.
8. A process is called a heavyweight process; a thread is called a lightweight
process.
9. Process switching uses an interface to the operating system; thread switching
does not require a call into the operating system or an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes;
if one thread is blocked, peer threads in the same task cannot run.
11. A process has its own Process Control Block (PCB), stack, and address space;
a thread has the parent's PCB, its own Thread Control Block and stack, and a
common address space.

3. Explain Amdahl’s Law of computer design.


It is named after computer scientist Gene Amdahl( a computer architect from IBM
and Amdahl corporation), and was presented at the AFIPS Spring Joint Computer
Conference in 1967. It is also known as Amdahl’s argument. It is a formula which
gives the theoretical speedup in latency of the execution of a task at a fixed workload
that can be expected of a system whose resources are improved. In other words, it is a
formula used to find the maximum improvement possible by just improving a
particular part of a system. It is often used in parallel computing to predict the
theoretical speedup when using multiple processors.

Speedup:
Speedup is defined as the ratio of the performance for the entire task using the
enhancement to the performance for the entire task without the enhancement.
Equivalently, it can be defined as the ratio of the execution time for the entire task
without the enhancement to the execution time for the entire task using the
enhancement.
If Pe is the performance for the entire task using the enhancement when possible,
Pw is the performance for the entire task without the enhancement, Ew is the
execution time for the entire task without the enhancement, and Ee is the execution
time for the entire task using the enhancement when possible, then:

Speedup = Pe / Pw
or
Speedup = Ew / Ee

Amdahl’s law uses two factors to find speedup from some enhancement –

Fraction enhanced – The fraction of the computation time in the original computer
that can be converted to take advantage of the enhancement. For example, if 10
seconds of the execution time of a program that takes 40 seconds in total can use an
enhancement, the fraction is 10/40. This obtained value is the Fraction enhanced.
Fraction enhanced is always less than 1.
Speedup enhanced – The improvement gained by the enhanced execution mode; that
is, how much faster the task would run if the enhanced mode were used for the entire
program. For example, if the enhanced mode takes 3 seconds for a portion of the
program that took 6 seconds in the original mode, the improvement is 6/3 = 2. This
value is the Speedup enhanced. Speedup enhanced is always greater than 1.
The overall speedup is the ratio of the two execution times:

Speedup overall = 1 / ((1 - Fraction enhanced) + Fraction enhanced / Speedup enhanced)
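The law can be checked numerically with a small Python function, reusing the section's own example numbers (a 10-second portion of a 40-second program, so the fraction is 0.25, made twice as fast); the function name is illustrative.

```python
# Amdahl's law: overall speedup from the enhanced fraction f (< 1) and the
# speedup S' of the enhanced portion alone (> 1).
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    return 1.0 / ((1.0 - fraction_enhanced)
                  + fraction_enhanced / speedup_enhanced)


# The section's example: f = 10/40 = 0.25, enhanced portion runs 2x faster.
# New runtime = 30 s (unchanged part) + 10/2 = 5 s (enhanced part) = 35 s,
# so the overall speedup is 40/35.
print(amdahl_speedup(0.25, 2.0))  # 1.1428..., i.e. 40/35
```

Notice that even an infinite speedup of the enhanced portion would only give 1 / (1 - 0.25) = 1.33x overall, which is the "maximum improvement" limit the law expresses.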

Proof:
Let the overall speedup be S, the old execution time be T, the new execution time be
T', the execution time taken by portion A (the part that will be enhanced) be t, the
execution time taken by portion A after enhancing be t', the execution time taken by
the portion that won't be enhanced be tn, the Fraction enhanced be f', and the
Speedup enhanced be S'.
Then T = t + tn and T' = t' + tn.
By definition, f' = t / T, so t = f'T and tn = (1 - f')T.
Also S' = t / t', so t' = t / S' = f'T / S'.
Substituting, T' = (1 - f')T + f'T / S' = T((1 - f') + f' / S').
Therefore S = T / T' = 1 / ((1 - f') + f' / S'), which is Amdahl's Law.
