Unit 6-Part 2: Parallel Processing
Parallel Processing
Chapter 17
• Multiple processor organizations
  • Types of parallel processor systems
  • Parallel organizations
• Symmetric multiprocessors
  • Organization
  • Multiprocessor operating system design considerations
• Clusters
  • Cluster configurations
  • Operating system design issues
  • Cluster computer architecture
  • Blade servers
  • Clusters compared to SMP
[Figure: Parallel processor architectures, showing Uniprocessor, Symmetric Multiprocessor (SMP), Nonuniform Memory Access (NUMA), and Clusters]
[Figure: Alternative computer organizations, including (a) SISD and (c) MIMD with shared memory; the diagrams show control units (CU), processing units (PU), local memories (LM), shared memory, and an interconnection network, with IS = instruction stream and DS = data stream]
[Figure: Time lines of Process 1, Process 2, and Process 3 on a single processor versus multiple processors; legend: Blocked, Running]
• Availability
  • Since all processors can perform the same functions, the failure of a single processor does not halt the system
• Incremental growth
  • Users can enhance performance by adding additional processors
• Scaling
  • Vendors can offer a range of products based on the number of processors
[Figure: Generic SMP organization, with processors and I/O connected to main memory through an interconnection network; and a shared-bus organization, with processors, main memory, and an I/O subsystem of I/O adapters attached to a shared bus]
• Simplicity
  • Simplest approach to multiprocessor organization
• Flexibility
  • Generally easy to expand the system by attaching more processors to the bus
• Reliability
  • The bus is essentially a passive medium, and the failure of any attached device should not cause failure of the whole system
• Scheduling
  • Any processor may perform scheduling, so conflicts must be avoided
  • The scheduler must assign ready processes to available processors
• Synchronization
  • With multiple active processes having potential access to shared address spaces or I/O resources, care must be taken to provide effective synchronization
  • Synchronization is a facility that enforces mutual exclusion and event ordering (see the sketch after this list)
• Memory management
  • In addition to dealing with all of the issues found on uniprocessor machines, the OS needs to exploit the available hardware parallelism to achieve the best performance
  • Paging mechanisms on different processors must be coordinated to enforce consistency when several processors share a page or segment, and to decide on page replacement
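To make the synchronization point above concrete, here is a minimal mutual-exclusion sketch using POSIX threads. The thread count, iteration count, and shared counter are illustrative stand-ins (not from the slides) for the shared kernel structures an SMP operating system must protect.

/* Minimal mutual-exclusion sketch (POSIX threads). The shared counter is
 * an illustrative stand-in for shared OS state: updates to it must be
 * serialized so that only one thread (or processor) modifies it at a time. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   100000

static long shared_counter = 0;                        /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        shared_counter++;             /* only one thread at a time here */
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    /* With the mutex the result is deterministic: NTHREADS * NITERS. */
    printf("counter = %ld (expected %d)\n", shared_counter, NTHREADS * NITERS);
    return 0;
}

Compile with a pthread-enabled C compiler (for example, cc -pthread); removing the lock/unlock calls makes the final count nondeterministic, which is exactly the hazard the OS synchronization facility guards against.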
• A cluster is defined as:
  • A group of interconnected whole computers working together as a unified computing resource that can create the illusion of being one machine
  • (The term "whole computer" means a system that can run on its own, apart from the cluster)
• Two approaches:
  • Highly available clusters
  • Fault-tolerant clusters
• Failover
  • The function of switching applications and data resources over from a failed system to an alternative system in the cluster
• Failback
  • Restoration of applications and data resources to the original system once it has been fixed
• Load balancing (a minimal sketch follows this list)
  • Incremental scalability
  • Automatically include new computers in scheduling
  • Middleware needs to recognize that processes may switch between machines
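As a rough illustration of the failover and load-balancing ideas above, here is a minimal round-robin load-balancing sketch in C that skips nodes marked as failed. The node names and health flags are illustrative assumptions; real cluster middleware would detect failures with heartbeats and migrate application state on failover and failback.

/* Round-robin load balancing with failover: requests are handed to cluster
 * nodes in turn, skipping nodes marked as failed. The node list and health
 * flags are illustrative; actual middleware would probe node health. */
#include <stdbool.h>
#include <stdio.h>

#define NNODES 4

struct node {
    const char *name;
    bool healthy;
};

static struct node cluster[NNODES] = {
    {"node-a", true}, {"node-b", true}, {"node-c", false}, {"node-d", true},
};

/* Advance *cursor round-robin and return the next healthy node, or -1. */
static int pick_node(int *cursor)
{
    for (int tried = 0; tried < NNODES; tried++) {
        *cursor = (*cursor + 1) % NNODES;
        if (cluster[*cursor].healthy)
            return *cursor;
    }
    return -1;  /* every node has failed */
}

int main(void)
{
    int cursor = -1;

    for (int req = 0; req < 8; req++) {
        int n = pick_node(&cursor);
        if (n < 0) {
            puts("no healthy node available");
            break;
        }
        printf("request %d -> %s\n", req, cluster[n].name);
    }
    return 0;
}

In this sketch, requests cycle over node-a, node-b, and node-d while node-c is down; marking node-c healthy again (failback) automatically puts it back into the rotation.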
[Figure: Cluster computer architecture, with cluster middleware (single system image and availability infrastructure) layered above the individual nodes, each with its own network interface hardware]
SMP
• Easier to manage and configure
• Much closer to the original single-processor model for which nearly all applications are written
• Less physical space and lower power consumption

Clustering
• Far superior in terms of incremental and absolute scalability
• Superior in terms of availability
• All components of the system can readily be made highly redundant
Thank You....!!!
All The Best