
1.5 MESSAGE-PASSING SYSTEMS VERSUS SHARED MEMORY SYSTEMS


In multiprocessor systems, communication between tasks takes place through two main modes:

Message passing systems:
This allows multiple processes to read and write data to the message queue
without being connected to each other.
Messages are stored on the queue until their recipient retrieves them. Message
queues are quite useful for inter-process communication and are used by most
operating systems.
Shared memory systems:
The shared memory is the memory that can be simultaneously accessed by
multiple processes. This is done so that the processes can communicate with each
other.
Communication among processors takes place through shared data variables, and
control variables for synchronization among the processors.
Semaphores and monitors are common synchronization mechanisms on shared
memory systems.
When the shared memory model is implemented in a distributed environment, it is
termed distributed shared memory.
[Figure: (a) Message Passing Model: processes A and B exchange messages m0, m1, m2, ..., mn through a kernel-managed message queue. (b) Shared Memory Model: processes A and B communicate through a region of shared memory.]

Fig 1.11: Inter-process communication models

Differences between message passing and shared memory models


Services offered:
Message passing: Variables have to be marshalled from one process, transmitted, and unmarshalled into other variables at the receiving process.
Distributed shared memory: The processes share variables directly, so no marshalling and unmarshalling is needed. Shared variables can be named, stored, and accessed in DSM.

Message passing: Processes can communicate with other processes. They can be protected from one another by having private address spaces.
Distributed shared memory: A process does not have a private address space, so one process can alter the execution of another.

Message passing: This technique can be used in heterogeneous computers.
Distributed shared memory: This cannot be used in heterogeneous computers.

Message passing: Synchronization between processes is through message-passing primitives.
Distributed shared memory: Synchronization is through locks and semaphores.

Message passing: Processes communicating via message passing must execute at the same time.
Distributed shared memory: Processes communicating through DSM may execute with non-overlapping lifetimes.

Efficiency:
Message passing: All remote data accesses are explicit, and therefore the programmer is always aware of whether a particular operation is in-process or involves the expense of communication.
Distributed shared memory: Any particular read or update may or may not involve communication by the underlying runtime support.
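The marshalling cost noted in the table above can be made concrete. Before transmission, a variable must be serialised into a byte stream and rebuilt into a fresh copy on the receiving side; this sketch uses Python's pickle as a stand-in for any marshalling scheme (the record contents are arbitrary):

```python
import pickle

# Sender side: marshal the variable into bytes for transmission.
record = {"node": "A", "clock": 42}
wire_bytes = pickle.dumps(record)    # marshalling

# Receiver side: unmarshal back into a new, private variable.
received = pickle.loads(wire_bytes)  # unmarshalling

print(received == record)      # True: same contents...
print(received is record)      # False: ...but a separate copy
```

DSM avoids this explicit round trip because both processes name and access the shared variable directly, at the cost of hidden communication performed by the runtime.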
1.8 DESIGN ISSUES AND CHALLENGES IN DISTRIBUTED SYSTEMS
The design of distributed systems has numerous challenges. They can be categorized
into:
Issues related to system and operating systems design
Issues related to algorithm design
Issues arising due to emerging technologies
The above three classes are not mutually exclusive.

1.8.1 Issues related to system and operating systems design


The following are some of the common challenges to be addressed in designing a
distributed system from a system perspective:
Communication: This task involves designing suitable communication mechanisms
among the various processes in the network.
Examples: RPC, RMI
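As an illustrative sketch of the RPC style named above (using Python's standard xmlrpc modules; the service endpoint, procedure name, and arguments are arbitrary choices for the example):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # The procedure actually executes in the server process.
    return a + b

# Server: register the remote procedure and serve in a background thread.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)  # 0 = any free port
port = server.server_address[1]
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: the call looks like a local function call, but the arguments
# are marshalled, shipped to the server, and the result shipped back.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)
print(result)  # 5
server.shutdown()
```

RMI follows the same pattern in Java, with remote objects and method invocations taking the place of registered procedures.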
Processes: The main challenges involved are process and thread management at both
client and server environments, migration of code between systems, and design of software
and mobile agents.
Naming: Devising easy-to-use and robust schemes for names, identifiers, and
addresses is essential for locating resources and processes in a transparent and scalable
manner. The remote and highly varied geographical locations make this task difficult.
Synchronization: Mutual exclusion, leader election, deploying physical clocks, and
global state recording are some synchronization mechanisms.
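One of these mechanisms can be sketched compactly. In bully-style leader election, the live process with the highest identifier becomes the coordinator; the sketch below shows only the election outcome (a real implementation exchanges election/answer/coordinator messages and handles failures during the election; the node ids are arbitrary):

```python
def elect_leader(alive_ids):
    """Bully-style outcome: the highest-id live process wins."""
    if not alive_ids:
        raise ValueError("no live processes to elect from")
    return max(alive_ids)

# Five nodes, with the previous coordinator (node 5) crashed:
# the highest surviving id, node 4, becomes the new coordinator.
nodes = {1, 2, 3, 4, 5}
crashed = {5}
leader = elect_leader(nodes - crashed)
print(leader)  # 4
```

The point of the mechanism is agreement: after the exchange, every live process knows the same coordinator id, so a single node can safely take on a centralized role.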
Data storage and access schemes: Designing file systems for easy and efficient data
storage with implicit accessing mechanisms is very much essential for distributed operation.
Consistency and replication: The notion of distributed systems goes hand in hand
with replication of data, to provide a high degree of scalability. The replicas should be handled
with care, since data consistency is a prime issue.
Fault tolerance: This requires maintenance of fail-proof links, nodes, and processes.
Some of the common fault tolerance techniques are resilience, reliable communication,
distributed commit, checkpointing and recovery, agreement and consensus, failure detection,
and self-stabilization.
Security: Cryptography, secure channels, access control, key management
(generation and distribution), authorization, and secure group management are some of the
security measures imposed on distributed systems.
Applications Programming Interface (API) and transparency: The user-friendliness
and ease of use are very important to make distributed services usable by a
wide community. Transparency, which is hiding the inner implementation policy from users, is
of the following types:
Access transparency: hides differences in data representation.
Location transparency: hides differences in locations by providing uniform access to
data located at remote locations.
Migration transparency: allows relocating resources without changing names.
Replication transparency: makes the user unaware of whether he is working on the
original or replicated data.
Concurrency transparency: masks the concurrent use of shared resources from the
user.
Failure transparency: makes the system reliable and fault-tolerant.
Scalability and modularity: The algorithms, data, and services must be as distributed
as possible. Various techniques such as replication, caching and cache management, and
asynchronous processing help to achieve scalability.
