
PREFACE

Distributed shared memory (DSM) systems represent a successful hybrid of two parallel computer classes: shared memory multiprocessors and distributed computer systems. They provide the shared memory abstraction in systems with physically distributed memories, and consequently combine the advantages of both approaches. Because of this, the concept of distributed shared memory is recognized as one of the most attractive approaches for building large-scale, high-performance multiprocessor systems, and its growing importance makes a thorough understanding of the subject essential. DSM is the abstraction that supports the notion of shared memory on top of a physically non-shared (distributed) architecture. Problems in this area logically resemble the cache coherence problems of shared memory multiprocessors; the closest relation can be established between the algorithmic issues of DSM coherence maintenance strategies and cache coherence maintenance strategies in shared memory multiprocessors.

CONTENTS

CHAPTER 1  INTRODUCTION
CHAPTER 2  SYSTEM V SHARED MEMORY
CHAPTER 3  DSM CONCEPTS AND ISSUES
  3.1  DISTRIBUTED SYSTEMS
  3.2  THE ADVANTAGES OF THE DSM PROGRAMMING MODEL
  3.3  DSM IMPLEMENTATION ISSUES
    3.3.1  DSM IMPLEMENTATION ALGORITHMS
    3.3.2  NETWORK COMMUNICATION
    3.3.3  CONSISTENCY MODELS
    3.3.4  DATA GRANULARITY
    3.3.5  THE COHERENCE PROTOCOLS
CHAPTER 4  ARCHITECTURAL OVERVIEW
  4.1  PROGRAMMING MODEL
  4.2  SYSTEM STRUCTURE
    4.2.1  DSM SUBSYSTEM
    4.2.2  DSM SERVER
    4.2.3  KEY SERVER
  4.3  COMMUNICATION ISSUES
    4.3.1  USER-LEVEL PROCESS - DSM SUBSYSTEM COMMUNICATION
    4.3.2  DSM SUBSYSTEM - DSM SERVER COMMUNICATION
    4.3.3  COMMUNICATION BETWEEN DSM SERVERS
    4.3.4  DSM SERVER - KEY SERVER COMMUNICATION
CHAPTER 5  DSM PROTOCOL
  5.1  TERMINOLOGY
    5.1.1  MANAGER
    5.1.2  OWNER
    5.1.3  OWNER HINT
    5.1.4  COPIES HINT
    5.1.5  VERSION HINT
  5.2  READING A DSM PAGE
  5.3  WRITING A DSM PAGE
CHAPTER 6  DATA STRUCTURES FOR DSM
  6.1  MEMORY MANAGEMENT
    6.1.1  VIRTUAL MEMORY
  6.2  DATA STRUCTURES FOR SYSTEM V SHARED MEMORY
  6.3  DATA STRUCTURES TO SUPPORT DSM
CHAPTER 7  IMPLEMENTATION
  7.1  PSEUDOCODE FOR SHMGET, SHMAT, SHMDT
  7.2  EXAMPLE USING A TYPICAL SCENARIO
  7.3  PAGE FAULT EXCEPTION HANDLER
    7.3.1  HOW A DSM PAGE FAULT IS HANDLED
CHAPTER 8  SUMMARY
REFERENCES

Chapter 1 Introduction
The distributed shared-memory programming paradigm has been receiving increasing attention. Recent developments have produced viable distributed shared memory languages that are gaining vendor support, and several early compilers have been developed. This programming model has the potential to strike a balance between ease of programming and performance. As in the shared-memory model, programmers need not explicitly specify data accesses. At the same time, programmers can exploit data locality using a model that allows data to be placed close to the threads that process it, reducing remote memory accesses. In this report we present the fundamental concepts associated with this programming model, including execution models, synchronization, and memory consistency.

With DSM, local as well as remote memory can be accessed in a uniform manner, with the location of the shared region transparent to the application program. This report discusses our experience in implementing a DSM system in a paged virtual memory operating system environment. Close cooperation between the virtual memory management system and the mechanisms that make DSM possible is essential to ensure good performance of the DSM system and to make the use of shared memory viable in a distributed environment.

Traditionally, in a uniprocessor operating system, the resource-manager (system's) view covered the processor, information, memory and devices, while the extended-machine (user's) view dealt with virtual concepts, the goals being efficiency, flexibility and robustness. With recent technological advances such as low-cost, high-performance PCs and high-speed network interconnects, new classes of applications are emerging, such as concurrent/parallel systems, distributed systems and collaborative systems. The distributed systems supporting these new applications have transformed both the user-level and system-level perspectives. The user's view is now transparency: a single-computer view of multiple computer systems. The system's view is now one of distribution: a decentralized/autonomous, cooperative/collaborative body.

Traditionally, shared memory and message passing have been the two paradigms used for interprocess communication and synchronization in multiprocess computations. The primitive forms of IPC, i.e. lock files, signals and pipes, are restrictive, while semaphores and shared memory are more flexible. Shared memory is more appealing than message passing because in message passing, communication takes the form of messages explicitly exchanged between the processes, and data can be accessed only in the order in which it is sent, whereas with shared memory, communication is effected through variables shared between the processes concerned. DSM is a major area of interest as it offers a better price/performance ratio, improved speed and reliability, but problems arise from communication delays and the need to maintain consistency. The issues that need to be addressed are complex and impinge on kernel design. This project report is an account of our work in implementing a DSM system that preserves the interface of System V (non-distributed) shared memory.
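To make the contrast concrete, the following small sketch (illustrative only, not part of the DSM implementation described later) first exchanges a value through a pipe, where the data must be explicitly sent and received, and then lets two processes communicate through an ordinary shared variable. For simplicity the shared variable here uses an anonymous shared mapping rather than the System V interface introduced in Chapter 2; error handling is omitted.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Message passing: data must be explicitly sent and received. */
    int fd[2];
    pipe(fd);
    if (fork() == 0) {
        const char *msg = "42";
        write(fd[1], msg, strlen(msg) + 1);   /* explicit send    */
        _exit(0);
    }
    char buf[16];
    wait(NULL);
    read(fd[0], buf, sizeof(buf));            /* explicit receive */
    printf("via pipe:          %s\n", buf);

    /* Shared memory: both processes simply use the same variable. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (fork() == 0) {
        *shared = 42;                          /* plain assignment */
        _exit(0);
    }
    wait(NULL);
    printf("via shared memory: %d\n", *shared);
    return 0;
}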

Figure 2.1 Illustrating the concept of shared memory communication.

Chapter 2. SYSTEM V SHARED MEMORY


In this chapter, we explain a few of the System V shared memory system calls relevant to our implementation of DSM. The basic idea in shared memory systems is illustrated in Fig 2.1. The mapping of shared memory has to be accomplished through system calls, since the address spaces of distinct processes are otherwise disjoint.

int shmget (key_t key, int size, int shmflg)

This system call is used to create a shared memory region. key is used to generate a unique shared memory identifier; processes that wish to share a memory region use the same key. size specifies the size of the region in bytes. shmflg specifies the region creation conditions, a few of which are mentioned below.
IPC_CREAT: creates a shared region if one with the key does not already exist.
IPC_EXCL: creates a new shared region with the given key; if one already exists with that key, an error is flagged.
An important point to note is that no region is actually allocated by this call: only an entry is made in the system data structure (shmid_kernel), and its index into the table of descriptors (shm_seg[]) is returned.

void * shmat (int shmid, void * shmaddr, int shmflg);

shmat maps the region into the process address space. It is here that page tables are actually allocated; pages themselves are allocated only when page faults occur. shmid is the unique identifier returned by shmget. The shmaddr argument gives the process flexibility in choosing the attach address; the shared memory region is assigned space between the stack and the heap, and if shmaddr is zero the address is assigned by the system. shmflg specifies the alignment of the shared region. A few values are mentioned below.
SHM_SHARE_MAU: first available aligned address.
SHM_RND: attach at the nearest page address.

int shmdt (void * shmaddr);

shmdt unmaps the region from the virtual address space of the process, undoing whatever shmat had earlier accomplished.

int shmctl (int shmid, int cmd, struct shmid_ds * buf);

cmd specifies the operation to be performed, for example obtaining the status of a memory region, setting permissions, or removing a shared memory region.
IPC_STAT: returns the current values of the shmid_kernel structure.
IPC_SET: modifies a number of members of the shmid_kernel structure.

Figure 2.2 shows the various data structures when two processes are sharing memory. Every process occupies one entry in the process table, i.e. task[]. The task_struct data structure describes the characteristics of a process, whereas mm_struct holds the data needed for memory management. Each vm_area_struct holds the attributes of one virtual memory area of the process, and the vm_area_structs are connected in the form of an AVL tree. Every shared memory segment occupies one entry in the shm_seg[] table. The shmid_kernel data structure has a pointer to the vm_area_struct of the first process that has mapped the shared segment into its virtual memory space (process A); from this vm_area_struct there is a pointer to the vm_area_struct of the next process that has attached the segment, and so on. The shared page is mapped into the virtual address space of both processes through entries for this page in the page tables of both processes. Chapter 6 discusses these data structures in greater detail.

Figure 2.2 Overview of data structures when processes use shared memory
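To tie the calls above together, here is a minimal usage sketch of the sequence described in this chapter. The key value and region size are arbitrary examples and error handling is omitted; Chapter 7 walks through a fuller scenario and the corresponding kernel pseudocode.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* shmget: create (or look up) a 4096-byte region named by the key. */
    int shmid = shmget((key_t)0x1234, 4096, IPC_CREAT | 0666);

    /* shmat: map the region into this process's address space.
       A second process calling shmget/shmat with the same key
       sees the same bytes. */
    char *region = shmat(shmid, NULL, 0);

    strcpy(region, "hello through System V shared memory");
    printf("%s\n", region);

    /* shmdt: unmap the region; shmctl(IPC_RMID) then asks the kernel
       to remove it once no process has it attached. */
    shmdt(region);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}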
