Lecture 14: DSM (Distributed Shared Memory)

Outline
- motivation and the main idea
- consistency models
  - strict and sequential
  - causal
  - PRAM and processor
  - weak and release
- implementation of sequential consistency
- implementation issues
  - granularity
  - thrashing
  - page replacement

DSM idea
- all computers share a single paged, virtual address space
- pages can be physically located on any computer
- when a process accesses data in the shared address space, a mapping manager maps the request to the physical page
- the mapping manager is either the kernel or a runtime library
- if the page is remote, the process is blocked while the page is fetched
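The mapping step above can be sketched as follows. This is a minimal illustration with hypothetical names (`MappingManager`, `fetch_remote` are not from any real system); the network fetch is stubbed out.

```python
# Sketch of a DSM mapping manager (hypothetical names, illustration only).
# Each page of the shared address space has an owning node; a local access
# either hits a locally present page or "blocks" while the page is fetched.

PAGE_SIZE = 4096

class MappingManager:
    def __init__(self, node_id, page_owner):
        self.node_id = node_id          # this node's id
        self.page_owner = page_owner    # page number -> owning node id
        self.local_pages = {}           # page number -> bytearray(PAGE_SIZE)

    def fetch_remote(self, page_no, owner):
        # Stand-in for the real network fetch; in a real DSM system the
        # faulting process would block here until the page arrives.
        return bytearray(PAGE_SIZE)

    def read(self, addr):
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.local_pages:
            owner = self.page_owner[page_no]
            self.local_pages[page_no] = self.fetch_remote(page_no, owner)
        return self.local_pages[page_no][offset]
```

In a real implementation this logic sits behind the page-fault handler, so ordinary loads and stores trigger it transparently.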
Advantages of DSM
- simpler abstraction: the programmer does not have to worry about data movement; may be easier to implement than RPC since the address space is the same
- easier portability: sequential programs can in principle run directly on DSM systems
- possibly better performance
  - locality of data: data is moved in large blocks, which helps programs with good locality of reference
  - on-demand data movement
  - larger memory space: no need to page to disk
- flexible communication: sender and receiver need not both exist; nodes can join and leave the DSM system without affecting the others
- simplified process migration: a process can easily be moved to a different machine since all machines share the address space
Memory coherence
- DSM systems allow concurrent access to shared data
- concurrency may lead to unexpected results: what if a read does not return the value stored by the most recent write (because the write did not propagate)?
- memory is coherent if the value returned by a read operation is always the value the programmer expected
- to maintain coherence of shared data, a mechanism that controls (and synchronizes) memory accesses is used; this mechanism allows only a restricted set of memory access orderings
- memory consistency model: the set of allowable memory access orderings
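The stale-read problem above can be shown in a few lines. This is a toy illustration, not any real protocol: two dictionaries stand in for two nodes' copies of the shared data, and no coherence mechanism connects them.

```python
# Two unsynchronized replicas of the same shared variable x. Without a
# coherence mechanism, a read on node B can miss node A's latest write.
replica_a = {"x": 0}
replica_b = {"x": 0}

# Node A writes x = 1, but the update has not propagated to B yet.
replica_a["x"] = 1

stale = replica_b["x"]   # B still observes the old value 0
assert stale == 0        # not the value the programmer expected
```

A consistency model pins down exactly which of these surprising outcomes are allowed.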
Strict and sequential consistency
- strict consistency (strongest model)
  - the value returned by a read operation is always the value written by the most recent write operation
  - hard to implement
- sequential consistency (Lamport 1979)
  - the result of any execution is the same as if the operations of all processors were executed in some sequential order, and the operations of each process appear in that order in program order
    - the particular interleaving of operations does not matter, as long as all processes see the same ordering
  - a read operation may not return the result of the most recent write operation!
    - running a program twice may give different results
  - little concurrency

Causal consistency
- proposed by Hutto and Ahamad (1990)
- there is no single (even logical) ordering of operations; two processes may see the same operations ordered differently
- operations are sequenced in the same order if they are potentially causally related
  - read/write (or two write) operations on the same item are causally related
  - all operations of the same process are causally related
- causality is transitive: if a process carries out an operation B that causally depends on a preceding operation A, all subsequent operations by this process are causally related to A (even if they are on different items)

PRAM and processor consistency
- PRAM (Lipton & Sandberg 1988)
  - all processes see the memory writes done by a single process in the same (correct) order
  - PRAM = pipelined RAM
    - writes done by a single process can be pipelined; it does not have to wait for one write to finish before starting another
    - writes by different processes may be seen in a different order by a third process
  - easy to implement: order writes on each processor independently of all others
- processor consistency (Goodman 1989)
  - PRAM, plus
  - coherence on the same data item: all processes agree on the order of write operations to the same data item
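The causal-relatedness rules above (same item, same process, transitivity) can be sketched as a small transitive-closure computation. This is an illustration with invented names; it treats any read/write pair on the same item as related (per the slide's rule, two reads of the same item are not made related).

```python
from itertools import product

# Each operation is a tuple (process, kind, item), kind in {"r", "w"},
# listed in global real-time order. Two operations are directly related
# if they are by the same process, or if they touch the same item and
# are not both reads; causality is then the transitive closure.

def causally_related(ops):
    n = len(ops)
    rel = [[False] * n for _ in range(n)]
    for i, j in product(range(n), repeat=2):
        if i < j:
            same_process = ops[i][0] == ops[j][0]
            same_item = (ops[i][2] == ops[j][2]
                         and not (ops[i][1] == "r" and ops[j][1] == "r"))
            rel[i][j] = same_process or same_item
    for k in range(n):              # transitive closure
        for i in range(n):
            for j in range(n):
                rel[i][j] = rel[i][j] or (rel[i][k] and rel[k][j])
    return rel

ops = [("P1", "w", "x"),   # 0
       ("P2", "r", "x"),   # 1: reads P1's write  -> related to 0
       ("P2", "w", "y")]   # 2: same process as 1 -> transitively related to 0
rel = causally_related(ops)
assert rel[0][1] and rel[1][2] and rel[0][2]
```

Operation 2 touches a different item than operation 0, yet it is causally related to it through operation 1: exactly the transitivity case described above.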
Consistency models compared
- models differ by difficulty of implementation, ease of use, and performance
- strict consistency: most restrictive, but hard to implement
- sequential consistency: widely used; intuitive semantics; not much extra burden on the programmer
  - but does not allow much concurrency
- causal and PRAM consistency: allow more concurrency, but have non-intuitive semantics and put more of a burden on the programmer to avoid doing things that require stronger consistency
- weak and release consistency: intuitive semantics, but put an extra burden on the programmer

Implementation issues
- how to keep track of the location of remote data
- how to overcome the communication delays and high overhead associated with execution of communication protocols
- how to make shared data concurrently accessible at several nodes to improve system performance
Can a page move? Can it be replicated?
- nonreplicated, nonmigrating pages
  - all requests for the page have to be sent to the owner of the page
  - easy to enforce sequential consistency: the owner orders all access requests
  - no concurrency
- nonreplicated, migrating pages
  - all requests for the page have to be sent to the owner of the page
  - each time a remote page is accessed, it migrates to the processor that accessed it
  - easy to enforce sequential consistency: only processes on that processor can access the page
  - no concurrency
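Why the nonreplicated cases make sequential consistency easy can be sketched with a single owner thread that serializes every access. The names (`Owner`, `access`) are invented for illustration; the point is that one queue imposes one global order.

```python
import queue
import threading

# Sketch: all requests for a page go to its owner, which applies them one
# at a time from a queue. A single owner serializing every access yields
# one global order that all processes observe, i.e. sequential consistency.

class Owner(threading.Thread):
    def __init__(self):
        super().__init__()
        self.requests = queue.Queue()
        self.page = {"x": 0}        # the page's contents

    def run(self):
        while True:
            op = self.requests.get()
            if op is None:          # shutdown sentinel
                break
            kind, key, value, reply = op
            if kind == "write":
                self.page[key] = value
            reply.put(self.page[key])

owner = Owner()
owner.start()

def access(kind, key, value=None):
    reply = queue.Queue()
    owner.requests.put((kind, key, value, reply))
    return reply.get()              # block until the owner has processed it

access("write", "x", 1)
assert access("read", "x") == 1     # the read is ordered after the write
owner.requests.put(None)
owner.join()
```

The cost is equally visible: every access funnels through one node, so there is no concurrency.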
- replicated, migrating pages
  - all requests for the page have to be sent to the owner of the page
  - each time a remote page is accessed, it is copied to the processor that accessed it
  - multiple read operations can be done concurrently
  - hard to enforce sequential consistency: must invalidate (most common approach) or update the other copies of the page during a write operation
- replicated, nonmigrating pages
  - replicated at fixed locations
  - all requests to the page have to be sent to one of the owners of the page
  - hard to enforce sequential consistency: must update the other copies of the page during a write operation
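The write-invalidate approach for replicated pages can be sketched as below. This is a toy single-machine model (invented names, no real networking): reads create copies, and a write first discards every other copy.

```python
# Sketch of write-invalidate for replicated, migrating pages: reads copy
# the page to the reader; a write invalidates all other copies first, so
# no node can observe the old value after the write completes.

class Page:
    def __init__(self, data=0):
        self.data = data

class Node:
    def __init__(self, name):
        self.name = name
        self.copies = {}            # page number -> local Page copy

def read(node, page_no, master):
    if page_no not in node.copies:
        node.copies[page_no] = Page(master[page_no].data)  # replicate
    return node.copies[page_no].data

def write(node, page_no, value, all_nodes, master):
    for other in all_nodes:         # invalidate every other copy
        if other is not node:
            other.copies.pop(page_no, None)
    master[page_no] = Page(value)
    node.copies[page_no] = master[page_no]

a, b = Node("A"), Node("B")
master = {0: Page(7)}
assert read(a, 0, master) == 7 and read(b, 0, master) == 7  # concurrent readers
write(a, 0, 9, [a, b], master)
assert 0 not in b.copies            # B's stale copy was invalidated
assert read(b, 0, master) == 9      # B's next read fetches the new value
```

The alternative mentioned above (update instead of invalidate) would push the new value into `b.copies` rather than deleting it; invalidation is more common because it sends less data for write-heavy pages.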
Granularity
- granularity: the size of the shared memory unit
- page-based DSM
  - single page: simple to implement
  - multiple pages: take advantage of locality of reference, amortize network overhead over multiple pages
    - disadvantage: false sharing
- shared-variable DSM
  - share only those variables that are needed by multiple processes
  - updating is easier and false sharing can be avoided, but more burden is put on the programmer
- object-based DSM
  - retrieve not only data but the entire object: data, methods, etc.
  - old programs have to be heavily modified

Thrashing
- occurs when the system spends a large amount of time transferring shared data blocks from one node to another (compared to the time spent on useful computation)
  - interleaved data access by two nodes causes a data block to move back and forth
  - read-only blocks are invalidated as soon as they are replicated
- handling thrashing
  - the application specifies when to prevent other nodes from moving a block; requires modifying the application
  - nailing a block after transfer for a minimum amount of time t
    - hard to select t; a wrong selection makes inefficient use of DSM
    - adaptive nailing?
  - tailoring coherence semantics (Munin) to use object-based sharing
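The nailing idea can be sketched in a few lines. This is an illustration with invented names; the constant stands in for the hard-to-choose minimum residence time t.

```python
import time

# Sketch of "nailing": after a block migrates to a node, it is pinned
# there for at least t seconds, so two nodes interleaving accesses to the
# same block cannot ping-pong it on every access. Picking t is the hard
# part: too small and thrashing persists, too large and remote accesses
# stall needlessly.

NAIL_SECONDS = 0.05                 # the hard-to-choose t (illustrative)

class NailedBlock:
    def __init__(self):
        self.arrived = time.monotonic()   # when the block landed here

    def may_migrate(self):
        return time.monotonic() - self.arrived >= NAIL_SECONDS

block = NailedBlock()
assert not block.may_migrate()      # freshly transferred: requests deferred
time.sleep(NAIL_SECONDS)
assert block.may_migrate()          # nail expired: migration allowed again
```

Adaptive nailing would adjust `NAIL_SECONDS` per block based on its observed access pattern rather than using one fixed value.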
Page replacement
What to do when local memory is full?
- swap to disk?
- swap over the network?
- what if the page is replicated?
- what if it is read-only?
- what if it is read/write but clean (or dirty)?
- are shared pages given priority over private (non-shared) pages?
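One way to answer these questions is a cost-based victim choice. The sketch below is an assumption-laden illustration (the flags and cost values are invented, not from any real DSM system): replicated or read-only copies are cheapest to evict because another copy exists, clean pages need no write-back, and dirty pages are the most expensive.

```python
# Sketch of a page-replacement priority for DSM, ordering candidates by
# eviction cost (illustrative costs, not from a real system):
#   0 - replicated or read-only: another copy exists, just discard
#   1 - clean and unshared: discard, refetch from its source on demand
#   2 - dirty: must be written back (to its owner or to disk) first

def eviction_cost(page):
    # page: dict with 'replicated', 'read_only', 'dirty' flags
    if page["replicated"] or page["read_only"]:
        return 0
    if not page["dirty"]:
        return 1
    return 2

def choose_victim(pages):
    return min(pages, key=eviction_cost)

pages = [
    {"id": "p1", "replicated": False, "read_only": False, "dirty": True},
    {"id": "p2", "replicated": True,  "read_only": False, "dirty": False},
    {"id": "p3", "replicated": False, "read_only": False, "dirty": False},
]
assert choose_victim(pages)["id"] == "p2"   # the replicated copy goes first
```

The remaining question from the list, whether shared pages outrank private ones, would show up here as one more term in `eviction_cost`.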