Unit 3
Types:
Advantages
1. Several processors share the workload and collaborate to
complete it, which makes these systems more dependable.
2. Connecting multiple processors helps match the needs of an
application. At the same time, a multiprocessor system saves
money because the processors can share resources such as
power supplies and peripherals instead of duplicating them,
and the structure still allows for future expansion.
3. It improves the reliability of the system. The failure of any
one component of a multiprocessor system has only a limited
impact on the rest of the system.
4. It improves the system's cost-to-performance ratio.
5. A single-processor system carries a heavier burden because
every process must run on one CPU. In a multiprocessor
system the processes are spread across several CPUs, so
each processor does less work, which means a multiprocessor
can consume less power per task than a single processor.
Disadvantages
Because a multicomputer communicates by transmitting
messages between its processors, a task must be explicitly
divided among the CPUs to be completed. The multicomputer
can therefore be used for distributed computation. A
multicomputer is easier and less expensive to build than a
multiprocessor; on the other hand, programming a
multicomputer is more complex.
Flynn’s Taxonomy
Flynn's taxonomy is a classification of parallel computer
architectures based on the number of concurrent instruction
streams (single or multiple) and data streams (single or
multiple) available in the architecture.
1. Parallel Programming:
Characteristics:
o Shared memory: In most parallel systems, multiple
processors or cores have access to shared memory.
o Fine-grained parallelism: The program is broken down into
small tasks that are executed concurrently.
o Tightly coupled systems: The processors are often on the
same physical machine and are interconnected with high-
speed communication channels.
Example:
o Multi-core processors in modern computers allow for parallel
execution of programs. For instance, a program that
processes a large dataset can be divided into smaller
chunks that are processed in parallel on different cores,
speeding up the computation.
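The chunked-dataset idea above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the sum-of-squares workload, the function names, and the chunk count are all assumptions made for the example.

```python
# Sketch of data parallelism: a large dataset is split into chunks
# that are processed concurrently by separate worker processes.
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for any per-chunk computation (here: sum of squares).
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_chunks=4):
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool() as pool:
        # Each chunk may run on a different core.
        partials = pool.map(process_chunk, chunks)
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # 332833500
```

The speed-up comes from the chunks being independent: no chunk needs another chunk's result, so the only sequential steps are the split and the final combine.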
2. Concurrent Programming:
3. Distributed Programming:
Characteristics:
o Independent memory: Each machine has its own local
memory, and there’s no shared memory.
o Communication through messages: Machines
communicate with each other using network protocols (e.g.,
TCP/IP, RPC).
o Geographically dispersed: Machines may be in different
locations, leading to potential delays due to network latency.
Example:
o Cloud computing platforms like AWS or Google Cloud use
distributed systems where multiple servers (distributed
across the globe) work together to provide services like data
storage, processing, or hosting applications.
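Communication through messages, as described above, can be sketched with plain TCP sockets. Here the two "machines" are simulated as threads on one host; the message format (`ACK:` prefix) is an assumption made for the example.

```python
# Sketch of message passing between two nodes over TCP.
import socket
import threading

def serve_one(srv):
    # Server node: accept one connection, receive a request, reply.
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"ACK:" + request)  # acknowledge the message
    srv.close()

def start_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    threading.Thread(target=serve_one, args=(srv,)).start()
    return srv.getsockname()[1]

def send_message(port, payload):
    # Client node: no shared memory, only messages over the network.
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(payload)
        return cli.recv(1024)

if __name__ == "__main__":
    port = start_server()
    print(send_message(port, b"balance?"))  # b'ACK:balance?'
```

In a real distributed system the two endpoints would be on different hosts, so the same exchange would additionally be subject to network latency and failures.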
Coupling
The term coupling is associated with the configuration
and design of processors in a multiprocessor system.
The degree of coupling among a set of modules, whether hardware or software, is measured in
terms of the interdependency and binding and/or homogeneity
among the modules.
Asynchronous Execution:
A communication among processes is considered
asynchronous, when every communicating process can have
a different observation of the order of the messages being
exchanged. In an asynchronous execution:
o there is no processor synchrony and there is no bound on
the drift rate of processor clocks
o message delays are finite but unbounded
o there is no upper bound on the time taken by a process to
execute a step
Synchronous Execution:
In synchronous execution, tasks or processes must wait
for one another to complete before continuing: when one
process sends a request or message, it waits for a response
before proceeding. A communication among processes is
considered synchronous when every process observes the
same order of messages within the system. In the same
manner, the execution is considered synchronous when
every individual process in the system observes the same
total order of all the events which happen within it. In a
synchronous execution:
o processors are synchronized and the clock drift rate between
any two processors is bounded
o message delivery times are such that they occur in one logical
step or round
o there is a known upper bound on the time taken by a process
to execute a step
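The contrast between the two execution styles can be sketched with in-process message queues standing in for channels. The "process" here is a thread, and the request/reply strings are assumptions made for the example.

```python
# Sketch: synchronous vs. asynchronous message exchange.
import queue
import threading

def worker(inbox, outbox):
    # A "process" that serves one request from its inbox.
    request = inbox.get()
    outbox.put("reply to " + request)

def sync_request(request):
    # Synchronous style: send the request, then block until the reply
    # arrives -- the sender cannot proceed without the response.
    inbox, outbox = queue.Queue(), queue.Queue()
    threading.Thread(target=worker, args=(inbox, outbox)).start()
    inbox.put(request)
    return outbox.get()  # sender waits here

def async_request(request):
    # Asynchronous style: send the request and keep working; the reply
    # is collected later, with a timeout because delays are unbounded.
    inbox, outbox = queue.Queue(), queue.Queue()
    threading.Thread(target=worker, args=(inbox, outbox)).start()
    inbox.put(request)
    return outbox  # caller polls later: outbox.get(timeout=...)

if __name__ == "__main__":
    print(sync_request("req-1"))       # reply to req-1
    pending = async_request("req-2")
    # ... other work happens here without waiting ...
    print(pending.get(timeout=5))      # reply to req-2
```

The key difference is where the sender blocks: immediately after sending in the synchronous case, and only when it finally needs the reply in the asynchronous case.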
Clock synchronization
Example:
Consider a distributed banking system with multiple branches,
each having its own database. The global state would
represent the total balance across all branches at any given
moment. Tracking this state accurately would be important for
ensuring the correctness of any transactions performed in the
system.
Example:
If either part of the transaction fails (e.g., the inventory update fails),
the entire transaction should be rolled back, and the order should not
be created, ensuring that both databases remain in a consistent state.
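The all-or-nothing behavior described above can be sketched as follows. The two "databases" are modeled as plain dictionaries, and the failure condition (insufficient stock) is an assumption made for the example; a real system would use database transactions or a commit protocol.

```python
# Sketch: keep two stores consistent -- both updates succeed,
# or both are rolled back.
orders = {}                   # "orders database"
inventory = {"widget": 5}     # "inventory database"

def place_order(order_id, item, qty):
    snapshot = dict(inventory)         # remember state for rollback
    orders[order_id] = (item, qty)     # step 1: create the order
    try:
        if inventory.get(item, 0) < qty:
            raise ValueError("insufficient stock")  # update fails
        inventory[item] -= qty         # step 2: update inventory
        return True
    except ValueError:
        inventory.clear()
        inventory.update(snapshot)     # restore inventory
        del orders[order_id]           # undo the order creation
        return False
```

After a failed call, neither store shows any trace of the attempted order, which is exactly the consistency property the example requires.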