In parallel and distributed applications there are two common programming models for inter-process communication: shared memory and message passing. Shared memory has been the standard for tightly-coupled systems (multiprocessors), where the processors have uniform access to a single global memory. Considerable progress has recently been made in the research and development of systems with multiple processors, capable of delivering high computing power to satisfy the constantly increasing demands of typical applications.
Overview of Distributed Shared Memory
Distributed shared memory (DSM) systems simplify the task of writing distributed-memory parallel programs by automating data distribution and communication. Unfortunately, DSM systems control memory and communication using fixed policies, even when programmers or compilers could manage these resources more efficiently. Distributed shared memory is an attempt to merge the ease of programming of shared memory with the scalability of distributed-memory architectures. Hiding the distribution aspects of an architecture from the programmer eases programming, and especially eases porting existing applications. DSM systems exist in large numbers and with widely varying behavior.
Distributed shared memory (DSM) is an abstraction used for sharing data between computers that do not share physical memory. Processes access DSM by reads and updates to what appears to be ordinary memory within their address space. However, an underlying runtime system ensures transparently that processes executing at different computers observe the updates made by one another. It is as though the processes access a single shared memory, but in fact the physical memory is distributed.
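The programming model being described can be sketched on a single machine with Python's `multiprocessing` module: two processes communicate through plain reads and writes to what looks like ordinary memory. This is only an illustrative single-machine analogue; DSM's contribution is that a runtime system extends this same model across machine boundaries.

```python
from multiprocessing import Process, Value

def writer(counter):
    # An ordinary-looking memory write; the runtime makes the update
    # visible to the other process, as a DSM runtime would across nodes.
    counter.value = 42

if __name__ == "__main__":
    counter = Value("i", 0)   # an int living in memory shared by both processes
    p = Process(target=writer, args=(counter,))
    p.start()
    p.join()
    print(counter.value)      # the other process's update is observed
```

No explicit send or receive appears anywhere; the sharing is expressed entirely through memory accesses.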
In recent years, researchers have exploited the shared memory paradigm and studied its applicability to loosely-coupled systems. These efforts resulted in the introduction of a new concept that combines the best of the two basic models. This concept, commonly known as Distributed Shared Memory (DSM), refers to the abstraction of memory distributed over several systems, thus providing the illusion of a large “shared” memory. As illustrated in Figure 1, this global memory spans the private memories of the component processors and extends across machine boundaries. DSM allows processes executing on different interconnected processors to share memory by hiding the physical location(s) of data, making memory location transparent to the entire system. An important benefit of this approach is that parallel programs developed for (real) shared memory systems can execute on distributed architectures with no modification.
Figure 1: Distributed Shared Memory Abstraction
Distributed shared memory (DSM) systems distribute and communicate data automatically. These systems provide the abstraction of one global, uniformly fast memory. Programs access data by referencing locations in this global memory space. Systems transparently fetch data, regardless of its physical location, to satisfy these references. At the same time, these systems replicate and migrate data dynamically across the nodes to keep values near the processors that reference them. Because the shared-memory abstraction matches the model that arises naturally on bus-based multiprocessors, DSM systems take advantage of programs written for—and programmers trained on—the much more common bus-based machines. Programmers can develop correct, working programs without considering data distribution or communication.
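The fetch-and-replicate behaviour just described can be sketched as a toy, sequential model (all class and variable names here are hypothetical, and coherence between replicas after concurrent writes is deliberately omitted): each node caches whole pages, and a reference to a page the node does not hold triggers a fetch from the owning node, leaving a local replica for later references.

```python
PAGE_SIZE = 4  # words per page, chosen arbitrarily for the sketch

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.pages = {}        # page number -> local copy of the page

class ToyDSM:
    def __init__(self, nodes):
        self.nodes = nodes
        self.owner = {}        # page number -> node that owns the page

    def write(self, node, addr, value):
        page = self._ensure_page(node, addr // PAGE_SIZE)
        page[addr % PAGE_SIZE] = value

    def read(self, node, addr):
        page = self._ensure_page(node, addr // PAGE_SIZE)
        return page[addr % PAGE_SIZE]

    def _ensure_page(self, node, page_no):
        if page_no not in node.pages:               # "page fault"
            owner = self.owner.setdefault(page_no, node)
            if owner is node:
                node.pages[page_no] = [0] * PAGE_SIZE
            else:
                # Fetch the whole page from its owner and replicate it
                # locally, so later references are satisfied on this node.
                node.pages[page_no] = list(owner.pages[page_no])
        return node.pages[page_no]
```

For example, after `dsm.write(a, 5, 99)` on node `a`, a later `dsm.read(b, 5)` on node `b` fetches the entire page containing address 5 from `a` and returns 99 from the new local replica. A real DSM system would add an invalidation or update protocol so that replicas remain consistent under writes.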
Advantages of Distributed Shared Memory
- Hides data movement and provides a simpler abstraction for sharing data. Programmers do not need to worry about memory transfers between machines, as they do when using the message-passing model.
- Allows complex structures to be passed by reference, simplifying algorithm development for distributed applications.
- Takes advantage of locality of reference by moving the entire page containing the referenced data, rather than just the individual datum.
- Is cheaper to build than multiprocessor systems. The ideas can be implemented with ordinary hardware and require no complex interconnect between the shared memory and the processors.
- Makes larger memory sizes available to programs by combining the physical memory of all nodes. This larger memory avoids the disk latency that swapping incurs in traditional distributed systems.
- Allows a virtually unlimited number of nodes, unlike multiprocessor systems, in which main memory is accessed over a common bus that limits the size of the system.
- Allows programs written for shared memory multiprocessors to run on DSM systems.
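The first advantage above is easiest to see by contrast. Under message passing, every transfer is the programmer's responsibility; the sketch below (hypothetical, using Python's `multiprocessing.Queue`) shows the explicit send/receive pairs that DSM hides behind ordinary memory accesses.

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    x = inbox.get()        # explicit receive
    outbox.put(x + 1)      # explicit send

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(41)          # the programmer manages every transfer by hand
    print(outbox.get())    # explicit receive of the result
    p.join()
```

In a DSM program, the same exchange would be a write and a read to a shared location, with the runtime deciding when and what to transfer.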