Unit 4
Put simply, latency is the time interval between giving an input to a system and receiving its
output. Latency is essentially the in-between handling time of computers. Some of you may
think that when one system connects with another the transfer happens directly, but it does
not: the signal or data follows a proper trace route before reaching its final destination.
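As a rough illustration of this definition, the following Python sketch (the workload is a made-up stand-in, not anything from this unit) measures the interval between handing an input to an operation and receiving its output:

import time

def measure_latency(operation):
    """Return the elapsed wall-clock time (in seconds) between giving an
    input to `operation` and receiving its output."""
    start = time.perf_counter()
    operation()                      # the system handles the input
    return time.perf_counter() - start

# A hypothetical workload standing in for "the system".
observed = measure_latency(lambda: sum(range(100_000)))
print(f"observed latency: {observed * 1000:.3f} ms")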
Types of Latency
Interrupt Latency: Interrupt Latency can be described as the time it takes for a computer to
act on an interrupt signal.
Fiber Optic Latency: Fiber Optic Latency is the latency that comes from traveling
some distance through fiber optic cable.
Internet Latency: Internet Latency is the delay experienced by traffic over the internet; it
depends largely on the distance the data has to travel.
WAN Latency: WAN Latency is the delay incurred when a resource is requested from a
server, another computer, or any other device elsewhere on the network.
Audio Latency: Audio Latency can be easily stated as the delay between the creation
of the sound and the hearing of the sound.
Operational Latency: Operational Latency can be described as the time taken by a sequence
of operations when they are performed one after another in a flow.
Mechanical Latency: Mechanical Latency is the delay between a mechanical device acting
and the output being produced.
Computer and OS Latency: Computer and OS Latency is the combined delay introduced by
the computer and the operating system between receiving the input and producing the output.
Causes of Latency
Low Memory Space: Insufficient memory space makes it hard for the OS to satisfy the RAM
needs of running processes, which adds delay.
Propagation: The time a signal takes to travel from its source to its destination (a worked
example follows after this list).
Multiple Routers: As discussed above, data follows a full trace route, hopping from one
router to the next; every additional router adds to the latency.
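As a worked example of propagation latency (the figures below are illustrative; signals in optical fibre travel at roughly two-thirds the speed of light, about 2 x 10^8 m/s):

# Propagation latency = distance / signal speed in the medium.
SPEED_IN_FIBRE_M_PER_S = 2.0e8   # approximate speed of light in optical fibre

def propagation_latency_ms(distance_km):
    """One-way propagation delay over a fibre link, in milliseconds."""
    return distance_km * 1000 / SPEED_IN_FIBRE_M_PER_S * 1000

print(propagation_latency_ms(1000))   # 5.0 ms for a 1000 km link
print(propagation_latency_ms(6000))   # 30.0 ms for a 6000 km link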
Grain size: Grain size (granularity) is a measure of how much computation is involved in a
process. Grain size is determined by counting the number of instructions in a program
segment. The following types of grain size have been recognised:
1) Fine Grain: This type typically contains fewer than about 20 instructions.
2) Medium Grain: This type contains fewer than about 500 instructions.
3) Coarse Grain: This type contains roughly one thousand instructions or more.
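A small sketch of how a program segment might be classified by instruction count, using the figures above (the fine-grain threshold of about 20 instructions is the usual textbook figure and is an assumption here, not stated explicitly in this unit):

def grain_size(instruction_count):
    """Classify a program segment by the grain sizes listed above."""
    if instruction_count < 20:        # assumed fine-grain threshold
        return "fine"
    if instruction_count < 500:
        return "medium"
    if instruction_count >= 1000:
        return "coarse"
    return "between medium and coarse"

print(grain_size(8), grain_size(300), grain_size(5000))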
1) Instruction Level: This is the lowest level, and the degree of parallelism is highest here.
Fine grain size is used at the statement or instruction level, since only a few instructions make
up a grain. The exact grain size may vary with the type of program; for scientific applications,
for example, the instruction-level grain size may be larger. Because the highest degree of
parallelism is achievable at this level, the overhead for the programmer is also greater.
2) Loop Level: This is the next level of parallelism, where iterative loop instructions can be
parallelised. Fine grain size is used at this level too. Simple loops in a program are easy to
parallelise, whereas recursive loops are hard. This kind of parallelism can be achieved by
compilers (a sketch follows after this list of levels).
4) Program Level: This is the last level, consisting of independent programs run in parallel.
Coarse grain size is used at this level, with grains containing tens of thousands of instructions.
Time sharing is achieved at this level of parallelism, and parallelism at this level is exploited
by the operating system.
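As mentioned under loop-level parallelism above, a simple loop whose iterations are independent can be split across processors. A minimal Python sketch of the idea (the loop body is purely illustrative):

from multiprocessing import Pool

def body(i):
    """One independent loop iteration: no result depends on another iteration."""
    return i * i

if __name__ == "__main__":
    sequential = [body(i) for i in range(16)]      # ordinary sequential loop
    with Pool(processes=4) as pool:                # the same loop, iterations run in parallel
        parallel = pool.map(body, range(16))
    assert sequential == parallel                  # same results, different schedule

A recursive loop, where each step needs the result of the previous one, could not be split this way.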
Types of Multiprocessors
Symmetric Multiprocessors
Each processor in these systems runs an identical copy of the operating system and
communicates with the others. There is no master-slave relationship between the processors
because they are all peers.
The Encore version of Unix for the Multimax Computer is a symmetric multiprocessing
system.
Asymmetric Multiprocessors
In asymmetric multiprocessing the processors are not peers: a master processor controls the
system and assigns work to the other processors, each of which either carries out a predefined
task or waits for instructions from the master.
Graceful Degradation
Even if one processor fails in a multiprocessor system, the system will not come to a halt.
The ability to work seamlessly even in the case of hardware failure can be defined as graceful
degradation. If one of the five processors in a multiprocessor system fails, the remaining four
processors continue to work. As a result, rather than coming to a complete stop, the machine
slows down.
Increased Throughput
The system’s throughput increases as several processors work together, meaning that the
number of processes completed per unit of time goes up. With N processors, however, the
throughput increases by somewhat less than a factor of N, because of the overhead of keeping
all the processors working together correctly.
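A rough sketch of this effect (all names and workloads here are illustrative): running the same batch of jobs with one worker process and with four typically shows a throughput gain, but noticeably below 4x because of coordination overhead:

import time
from multiprocessing import Pool

def job(_):
    """A stand-in unit of work."""
    return sum(i * i for i in range(200_000))

def throughput(workers, jobs=32):
    """Completed jobs per second with the given number of worker processes."""
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(job, range(jobs))
    return jobs / (time.perf_counter() - start)

if __name__ == "__main__":
    t1 = throughput(1)
    t4 = throughput(4)
    print(f"1 worker: {t1:.1f} jobs/s, 4 workers: {t4:.1f} jobs/s, speed-up {t4 / t1:.2f}x")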
Economy of Scale
Since multiprocessor systems share data storage, peripheral devices, power supply, and other
resources, they are less expensive in the long run than multiple single-processor systems. If
several processes share data, it is preferable to schedule them on a multiprocessor system with
shared data rather than on separate computer systems holding different copies of the data.
Characteristics of Multiprocessors
1. Parallel Processing: This requires the use of many processors at the same time. These
   processors are designed to carry out a given task using a single architecture. The processors
   are generally identical, and they work together in such a way that each user feels like the
   only person using the system, even though in reality many users are sharing it.
4. Pipelining: This is a method that divides a task into multiple subtasks that must be
   completed in a specified order. Each subtask is handled by its own functional unit. The
   units are connected in series, and they all work at the same time on different subtasks.
5. Vector Computing: This involves applying the same operation to every element of a
   vector (an ordered set of data items) at once, rather than processing the elements one at a
   time (a short sketch follows after this list).
6. Systolic: This is similar to pipelining, but the units are not organised linearly. Systolic
   steps are often tiny and numerous, and they are performed in lockstep. This approach is
   more commonly used in specialised hardware such as image or signal processors.
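As referenced in the vector computing item above, the essence is one operation applied to whole vectors of operands at once rather than element by element. A small sketch using NumPy as a stand-in for vector hardware:

import numpy as np

# Scalar style: one addition per loop iteration.
a = list(range(8))
b = list(range(8, 16))
scalar_sum = [x + y for x, y in zip(a, b)]

# Vector style: a single vector addition conceptually covers every element.
va = np.arange(8)
vb = np.arange(8, 16)
vector_sum = va + vb

print(scalar_sum)            # [8, 10, 12, 14, 16, 18, 20, 22]
print(vector_sum.tolist())   # same values, produced by one vector operation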
Traditional computers are founded on a control flow organisation in which the order of
program execution is explicitly specified in the user program. Data flow computers offer a
high degree of parallelism at the fine-grain instruction level, while reduction computers are
based on a demand-driven method which starts an operation only when its result is demanded
by other computations.
Data flow and control flow computers − There are mainly two sorts of computers. Control
flow computers are conventional computers based on the Von Neumann machine: they carry
out instructions under program flow control. Data flow computers, by contrast, execute
instructions based on the availability of data.
Control Flow Computers − Control flow computers use shared memory to hold program
instructions and data objects. Variables in shared memory are updated by many instructions.
Because memory is shared, the execution of one instruction can create side effects on other
instructions, and in some cases these side effects prevent parallel processing from taking place.
A uniprocessor computer is inherently sequential because of its use of a control-driven
organisation.
Data Flow Computers − In a data flow computer, the execution of an instruction is driven
by data availability rather than being guided by a program counter. In this model, an
instruction is ready for execution as soon as its operands become available.
The instructions in a data-driven program are not ordered in any particular way. Rather than
being stored in shared memory, data is held directly inside the instructions.
Computational results are passed directly between instructions. The data produced by an
instruction is replicated into multiple copies and forwarded directly to all the instructions
that need it.
This data-driven design requires no shared memory, no program counter, and no control
sequencer. It does, however, require a special mechanism to detect data availability, match
data tokens with the instructions that need them, and enable the chain reaction of
asynchronous instruction execution.
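A toy sketch of the data-driven idea (purely illustrative, not a model of any real data-flow machine): each "instruction" lists the operands it needs and fires as soon as they are all available, with no program counter deciding the order:

instructions = {
    "t1": (lambda a, b: a + b, ["x", "y"]),
    "t2": (lambda a, b: a * b, ["t1", "z"]),
    "t3": (lambda a: a - 1, ["t2"]),
}
values = {"x": 2, "y": 3, "z": 4}          # data tokens available at the start

pending = dict(instructions)
while pending:
    for name, (op, operands) in list(pending.items()):
        if all(d in values for d in operands):          # all operands have arrived
            values[name] = op(*(values[d] for d in operands))
            del pending[name]                           # the instruction has fired

print(values["t3"])   # ((2 + 3) * 4) - 1 = 19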
Control flow refers to the path the execution takes in a program, and sequential
programming that focuses on explicit control flow using control structures like loops or
conditional statements is called imperative programming. In an imperative model, data may
follow the control flow, but the main question is about the order of execution.
Dataflow abstracts over explicit control flow by placing the emphasis on the routing and
transformation of data and is part of the declarative programming paradigm. In a dataflow
model, control follows data and computations are executed implicitly based on data
availability.
Concurrency control refers to the use of explicit mechanisms such as locks to synchronise
interdependent concurrent computations. The difference between the two models is a matter
of emphasis: either control flow schedules data movement, or data movement implies the
transfer of control.
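A tiny contrast in code (illustrative only): the imperative version spells out the order of steps, while the declarative version only states the transformation and lets the data drive the computation:

# Imperative (explicit control flow): the order of steps is written out.
total = 0
for x in [1, 2, 3, 4]:
    total += x * x

# Declarative / dataflow-flavoured: say what to compute; ordering is implicit.
total_declarative = sum(map(lambda x: x * x, [1, 2, 3, 4]))

assert total == total_declarative == 30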
SIMD
SIMD stands for 'Single Instruction stream, Multiple Data stream'. It describes an
organisation in which many processing units operate under the supervision of a common
control unit.
All processors receive the same instruction from the control unit but operate on different
items of data.
The shared memory unit must contain multiple modules so that it can communicate with all
the processors simultaneously.
SIMD is mainly dedicated to array processing machines. However, vector processors can also
be seen as a part of this group.
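A toy model of the SIMD organisation described above (everything here is illustrative): a single control unit broadcasts one instruction, and every processing element applies it to its own data item:

processing_elements = [3, 7, 11, 15]        # one local data item per processing unit

def broadcast(instruction, data_items):
    """The control unit issues one instruction; each unit applies it to different data."""
    return [instruction(item) for item in data_items]

print(broadcast(lambda x: x * 2 + 1, processing_elements))   # [7, 15, 23, 31]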