
Chapter 3: Processes

Process
 An operating system creates a number of virtual processors, each one for running a different
program.
 The operating system uses a process table to keep track of these virtual processors; each entry is a process control block (PCB).
 The process table contains entries to store CPU register values, memory maps, open files,
accounting information, privileges, etc.
 A process is a running instance of a program, including all variables and other state attributes on
one of the operating system's virtual processors.
 The operating system ensures that independent processes cannot affect each other's behavior.
 Sharing the same CPU and other hardware resources is made transparent with hardware support
to enforce this separation.
 Each time a process is created, the operating system must create a complete independent address
space.
 Example: zeroing a data segment, copying the associated program into a text segment,
and setting up a stack for temporary data.
 Switching the CPU between two processes requires:
 Saving the CPU context (which consists of register values, program counter, stack
pointer, etc.),
 Modifying registers of the memory management unit (MMU), and
 Invalidating address translation caches such as the translation lookaside buffer (TLB), a
CPU cache used to speed up virtual-address translation.
 If the operating system supports more processes than it can simultaneously hold in main
memory, it may have to swap processes between main memory and disk before the actual
switch can take place.
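The independence of process address spaces can be illustrated with a small, POSIX-only Python sketch (the variable names are invented for illustration):

```python
import os

x = 10
pid = os.fork()          # create a new process with a *copy* of this address space
if pid == 0:
    x = 99               # the child modifies only its own copy
    os._exit(0)          # child exits without running any further code
else:
    os.waitpid(pid, 0)   # parent waits for the child to finish
    print(x)             # -> 10: the parent's copy is untouched
```

The child's assignment is invisible to the parent because each process runs in its own, completely independent address space.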

Threads

 A thread is a basic unit of CPU utilization, consisting of a thread ID, a program counter, a set of
registers, and a stack.
 Traditional (heavyweight) processes have a single thread of control - There is one program counter,
and one sequence of instructions that can be carried out at any given time.
 As shown in Figure 1, multi-threaded applications have multiple threads within a single process,
each having their own program counter, stack and set of registers, but sharing common code, data,
and certain structures such as open files.
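A minimal Python sketch of this sharing: each thread has its own stack and program counter, yet all of them update the same process-wide data (the names are illustrative):

```python
import threading

counter = {"value": 0}           # data shared by all threads of the process
lock = threading.Lock()

def worker():
    # Each thread runs this code with its own stack and program counter,
    # but "counter" lives in the shared address space.
    for _ in range(1000):
        with lock:
            counter["value"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])          # -> 4000
```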

Dadhi R. Ghimire [email protected] Patan Multiple Campus


Figure 1 - Single-threaded and multithreaded processes
Motivation
 Threads are very useful in modern programming whenever a process has multiple tasks to perform
independently of the others.
 This is particularly true when one of the tasks may block, and it is desired to allow the other tasks to
proceed without blocking.
 For example in a word processor, a background thread may check spelling and grammar while a
foreground thread processes user input ( keystrokes ), while yet a third thread loads images from the
hard drive, and a fourth does periodic automatic backups of the file being edited.
 Another example is a web server - Multiple threads allow for multiple requests to be satisfied
simultaneously, without having to service requests sequentially or to fork off separate processes for
every incoming request. ( The latter is how this sort of thing was done before the concept of threads
was developed. A daemon would listen at a port, fork off a child for every incoming request to be
processed, and then go back to listening to the port. )
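The word-processor scenario can be sketched in a few lines of Python: a background thread performs a blocking task (simulated here with a sleep) while the foreground proceeds immediately (the task names are invented):

```python
import threading
import time

events = []

def autosave():
    # Background task that blocks for a while (a stand-in for disk I/O).
    time.sleep(0.1)
    events.append("backup written")

t = threading.Thread(target=autosave)
t.start()
events.append("keystroke handled")   # the foreground is not held up
t.join()
print(events)                        # -> ['keystroke handled', 'backup written']
```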

Thread Types
User Threads
 Threads are implemented at the user level by a thread library
 Library provides support for thread creation, scheduling and management.
 User threads are fast to create and manage.
Kernel Threads
 Supported and managed directly by the OS.
 Thread creation, scheduling and management take place in kernel space.
 Slower to create and manage.

Thread Usage in Non-Distributed Systems

Benefits of multithreaded processes:


 Increased responsiveness to user:

 A program continues running with other threads even if part of it is blocked or
performing a lengthy operation in one thread.
 Resource Sharing
 Threads share memory and resources of their process.
 Economy
 It is less time-consuming to create and manage threads than processes, as threads share
resources.
 Example: Thread creation is 30 times faster than process creation in Solaris.
 Utilization of Multiprocessor Architectures
 Increases concurrency because each thread can run in parallel on a different processor.
 Many applications are easier to structure as a collection of cooperating threads.
 e.g., word processor - separate threads can be used for handling user input, spelling and
grammar checking, document layout, index generation, etc.

Thread Implementation
 Threads are provided in the form of a thread package.
 The package contains operations to create and destroy threads as well as operations on
synchronization variables such as mutexes and condition variables.
 Two approaches to implement a thread package.
1. Construct a thread library that is executed entirely in user mode.
Advantages:
 It is cheap to create and destroy threads
 All thread administration is kept in the user's address space; the price of creating a thread is
primarily determined by the cost of allocating memory to set up a thread stack.
 Destroying a thread mainly involves freeing memory for the stack, which is no longer used.
 Switching thread context can be done in just a few instructions
Disadvantage:
 A blocking system call will immediately block the entire process to which the thread belongs,
and thus also all the other threads in that process
2. Have the kernel be aware of threads and schedule them.
Advantages
 Eliminates blocking problem.
Disadvantage:
 Every thread operation (creation, deletion, synchronization, etc.) will have to be carried out by
the kernel, requiring a system call.

Multithreading Models
Three common ways of establishing a relationship between user level threads and kernel-level threads
1. Many-to-One: Many user-level threads mapped to single kernel thread.
 Easier thread management.
 Blocking-problem.
 No concurrency.

 Examples: Green threads for Solaris

2. One-to-One: Each user-level thread maps to a kernel thread.


 Overhead of creating kernel threads, one for each user thread.
 No blocking problem
 Provides concurrency.
 Examples: Linux, family of Windows

3. Many-to-Many: It allows many user level threads to be mapped to many kernel threads.
 Allows the OS to create a sufficient number of kernel threads.
 Users can create as many user threads as necessary.
 No blocking and concurrency problems.
 Two-level model.

Thread Usage in Distributed Systems


Multithreaded Clients
To establish a high degree of distribution transparency, distributed systems that operate in wide-area
networks may need to conceal long inter-process message propagation times. The round-trip delay in a
wide-area network can easily be in the order of hundreds of milliseconds, or sometimes even seconds.
The usual way to hide communication latencies is to initiate communication and immediately proceed
with something else. A typical example where this happens is in Web browsers. In many cases, a Web
document consists of an HTML file containing plain text along with a collection of images, icons, etc.
To fetch each element of a Web document, the browser has to set up a TCP/IP connection, read the
incoming data, and pass it to a display component. Setting up a connection and reading incoming
data are inherently blocking operations. When dealing with long-haul communication, we also have the
disadvantage that the time for each operation to complete may be relatively long.
A Web browser often starts with fetching the HTML page and subsequently displays it. To hide
communication latencies as much as possible, some browsers start displaying data while it is still
coming in. While the text is made available to the user, including the facilities for scrolling and such,
the browser continues with fetching other files that make up the page, such as the images. The latter are
displayed as they are brought in. The user need thus not wait until all the components of the entire page
are fetched before the page is made available.
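The overlap a multithreaded browser achieves can be sketched with a thread pool; the sleep stands in for a blocking network read (the file names and delays are invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    # Stand-in for setting up a connection and reading data: blocks for 0.2 s.
    time.sleep(0.2)
    return name

parts = ["page.html", "logo.png", "icon.ico"]
start = time.time()
with ThreadPoolExecutor(max_workers=3) as pool:
    fetched = list(pool.map(fetch, parts))
elapsed = time.time() - start
# The three blocking fetches overlap: the total time is ~0.2 s, not ~0.6 s.
print(fetched, elapsed < 0.5)
```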

Multithreaded Servers
The main use of multithreading in distributed systems is found at the server side. Practice shows that
multithreading not only simplifies server code considerably, but also makes it much easier to develop
servers that exploit parallelism to attain high performance, even on uniprocessor systems.
To understand the benefits of threads for writing server code, consider the organization of a file server
that occasionally has to block waiting for the disk. The file server normally waits for an incoming
request for a file operation, subsequently carries out the request, and then sends back the reply. One
possible and particularly popular organization is shown in Figure 2. Here one thread, the dispatcher,
reads incoming requests for a file operation. The requests are sent by clients to a well-known end point

for this server. After examining the request, the server chooses an idle (i.e. blocked) worker thread and
hands it the request.

Figure 2: A multithreaded server organized in a dispatcher/worker model.

The worker proceeds by performing a blocking read on the local file system, which may cause the
thread to be suspended until the data are fetched from disk. If the thread is suspended, another thread is
selected to be executed. For example, the dispatcher may be selected to acquire more work.
Alternatively, another worker thread can be selected that is now ready to run.
Now consider how the file server might have been written in the absence of threads. One possibility is
to have it operate as a single thread. The main loop of the file server gets a request, examines it, and
carries it out to completion before getting the next one. While waiting for the disk, the server is idle and
does not process any other requests. Consequently, requests from other clients cannot be handled. In
addition, if the file server is running on a dedicated machine, as is commonly the case, the CPU is
simply idle while the file server is waiting for the disk. The net result is that many fewer requests per
time unit can be processed. Thus threads gain considerable performance, but each thread is
programmed sequentially, in the usual way.
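The dispatcher/worker organization of Figure 2 can be sketched with a shared request queue (all names are invented; a real server would read requests from the network):

```python
import queue
import threading

requests = queue.Queue()
replies = []
lock = threading.Lock()

def worker():
    # A worker blocks on the queue until the dispatcher hands it a request,
    # then performs the (possibly blocking) file operation.
    while True:
        req = requests.get()
        if req is None:            # shutdown signal
            return
        with lock:
            replies.append("data for " + req)

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

# The dispatcher reads incoming requests and hands each one off to a worker.
for name in ("a.txt", "b.txt", "c.txt"):
    requests.put(name)
for _ in workers:
    requests.put(None)
for w in workers:
    w.join()
print(sorted(replies))   # -> ['data for a.txt', 'data for b.txt', 'data for c.txt']
```

While one worker is suspended on a blocking operation, the dispatcher and the other workers keep running, which is exactly the gain described above.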

Virtualization
 Virtualization is a broad term that refers to the abstraction of computer resources.
 Virtualization creates an external interface that hides an underlying implementation

Figure 3: (a) General organization between a program, interface, and system and (b) General
organization of virtualizing system A on top of system B.

Virtualization can be divided into two main categories:


1. Platform virtualization involves the simulation of virtual machines.
 Platform virtualization is performed on a given hardware platform by "host" software (a control
program), which creates a simulated computer environment (a virtual machine) for its "guest"
software.
 The "guest" software, which is often itself a complete operating system, runs just as if it were
installed on a stand-alone hardware platform.

2. Resource virtualization involves the simulation of combined, fragmented, or simplified resources.
 Virtualization of specific system resources, such as storage volumes, name spaces, and network
resources.

Role of Virtualization in Distributed Systems


Issue:
 While hardware and low-level systems software change reasonably fast, software at higher levels of
abstraction (e.g., middleware and applications) is much more stable - legacy software cannot be
maintained at the same pace as the platforms it relies on.
Solution:
 Virtualization can help here by porting the legacy interfaces to the new platforms and thus
immediately opening up the latter for large classes of existing programs.

Issue:
 Networking has become completely pervasive.

 Connectivity requires that system administrators maintain a large and heterogeneous collection of
server computers, each one running a very different application that can be accessed by clients.
Solution:
 The diversity of platforms and machines can be reduced by letting each application run on its own
virtual machine, possibly including the related libraries and operating system, which, in turn, run on
a common platform.

Issue:
 Management of content delivery networks that support replication of dynamic content becomes
easier if edge servers support virtualization, allowing a complete site, including its environment,
to be dynamically copied.
Solution:
 Virtualization provides a high degree of portability and flexibility making it an important
mechanism for distributed systems.

Architectures of Virtual Machines


Four distinct levels of interfaces to computers whose behavior virtualization can mimic:
 An interface between the hardware and software, consisting of machine instructions that can be
invoked by any program.
 An interface between the hardware and software, consisting of machine instructions that can be
invoked only by privileged programs, such as an operating system.
 An interface consisting of system calls as offered by an operating system.
 An interface consisting of library calls, generally forming what is known as an application
programming interface (API). In many cases, the aforementioned system calls are hidden by an
API.

Figure 4: Various interfaces offered by computer systems.

Virtualization can take place in two different ways:
1. Process Virtual Machine
 Build a runtime system that provides an abstract instruction set that is to be used for executing
applications.
 Instructions can be interpreted (as is the case for the Java runtime environment), but could also
be emulated as is done for running Windows applications on UNIX platforms.

2. Virtual Machine Monitor (VMM)


 Implemented as a layer completely shielding the original hardware, but offering the complete
instruction set of that same (or other) hardware as an interface.
 This interface can be offered simultaneously to different programs.
 Possible to have multiple, and different operating systems run independently and concurrently
on the same platform.
 Example: VMware and Xen
 It can be further divided into Native Virtual Machine Monitor and a Hosted Virtual Machine
Monitor

Figure 5: (a) A process virtual machine, with multiple instances of (application, runtime) combinations
and (b) A Native virtual machine monitor, with multiple instances of (applications, operating system)
combinations and (c) A hosted virtual machine monitor.
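The "abstract instruction set" of a process virtual machine can be illustrated with a toy stack-based interpreter in Python (the instructions are invented for illustration and are not real Java bytecode):

```python
def run(program):
    # Interpret a tiny stack-based instruction set, in the spirit of a
    # process virtual machine executing an abstract instruction set.
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown instruction: " + op)
    return stack.pop()

result = run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)])
print(result)   # (2 + 3) * 4 -> 20
```

Any platform with such an interpreter can run the same program, which is what makes the process-VM approach portable.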

 VMMs will become increasingly important in the context of reliability and security for (distributed)
systems.
 Since they allow for the isolation of a complete application and its environment, a failure caused
by an error or security attack need no longer affect a complete machine.
 Portability is greatly improved as VMMs provide a further decoupling between hardware and
software, allowing a complete environment to be moved from one machine to another.

Clients
Networked User Interfaces
Two ways to support client-server interaction:

1. For each remote service - the client machine will have a separate counterpart that can contact the
service over the network. Example: an agenda running on a user's PDA that needs to synchronize with
a remote, possibly shared agenda. In this case, an application-level protocol will handle the
synchronization, as shown in Figure 6 (a).
2. Provide direct access to remote services by offering only a convenient user interface. The client
machine is used only as a terminal with no need for local storage, leading to an application-neutral
solution as shown in Figure 6 (b). This is the thin-client approach: everything is processed and stored at
the server.

Figure 6 (a) A networked application with its own protocol and (b) A general solution to allow access
to remote applications.
Example: The X Window System (X)
 Used to control bit-mapped terminals, which include a monitor, keyboard, and a pointing device
such as a mouse.
 Viewed as that part of an operating system that controls the terminal.
 X kernel is heart of the system.
 Contains all the terminal-specific device drivers - highly hardware dependent.
 X kernel offers a low-level interface for controlling the screen and for capturing events
from the keyboard and mouse.
 This interface is made available to applications as a library called Xlib.

Figure 7: The basic organization of the X Window System.

 X kernel and the X applications need not necessarily reside on the same machine.

 X provides the X protocol, which is an application-level communication protocol by which an
instance of Xlib can exchange data and events with the X kernel.
Example:
 Xlib can send requests to the X kernel for creating or killing a window, setting colors, and
defining the type of cursor to display, among many other requests.
 The X kernel will react to local events such as keyboard and mouse input by sending event
packets back to Xlib.

 Several applications can communicate at the same time with the X kernel.
 One specific application that is given special rights - the window manager (WM).
 WM can dictate the "look and feel" of the display as it appears to the user.
 The window manager can prescribe how each window is decorated with extra
buttons, how windows are to be placed on the display, and so on.
 Other applications will have to adhere to these rules.

How the X window system fits into client-server computing?


· The X kernel receives requests to manipulate the display.
· The X kernel acts as a server, while the applications play the role of clients.
· This terminology has been adopted by X, and although strictly speaking it is correct, it can easily
lead to confusion.

Thin-Client Network Computing


 Applications manipulate a display using the specific display commands as offered by X.
 These commands are sent over the network where they are executed by the X kernel on the server.
Issue:
 Applications written for X should separate application logic from user-interface commands
 This is often not the case - much of the application logic and user interaction are tightly coupled,
meaning that an application will send many requests to the X kernel for which it will expect a
response before being able to make a next step.
 Synchronous behavior adversely affects performance when operating over a wide-area network
with long latencies.

Servers
General Design Issues

 A server is a process implementing a specific service on behalf of a collection of clients.


 Each server is organized in the same way:
 It waits for an incoming request from a client
 Ensures that the request is fulfilled
 It waits for the next incoming request.

Several ways to organize servers:

Iterative server:
Iterative server handles request, then returns results to the client; any new client requests must wait for
previous request to complete (also useful to think of this type of server as sequential).

Concurrent server
Concurrent server does not handle the request itself; a separate thread or sub-process handles the
request and returns any results to the client; the server is then free to immediately service the next client
(i.e., there’s no waiting, as service requests are processed in parallel).
 A multithreaded server is an example of a concurrent server.
 An alternative implementation of a concurrent server is to fork a new process for each new
incoming request.
 This approach is followed in many UNIX systems.
 The thread or process that handles the request is responsible for returning a response to the
requesting client.
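A concurrent server can be sketched by spawning one thread per request; each request blocks (simulated here with a sleep), yet the three are served in overlapping fashion (the request names are invented):

```python
import threading
import time

def handle(request, results, lock):
    # Each request blocks for a while (e.g., waiting for the disk).
    time.sleep(0.1)
    with lock:
        results.append("done:" + request)

results, lock = [], threading.Lock()
start = time.time()
# Concurrent server: one thread per request. An iterative server would
# serve these three requests one after another, taking ~0.3 s in total.
threads = [threading.Thread(target=handle, args=(r, results, lock))
           for r in ("r1", "r2", "r3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print(sorted(results), elapsed < 0.25)   # overlapped: ~0.1 s in total
```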

Where do clients contact a server?


 Clients send requests to an end point, also called a port, at the machine where the server is
running.
 Each server listens to a specific end point.

How do clients know the end point of a service?


1. Globally assign end points for well-known services.
Examples:
 Servers that handle Internet FTP requests always listen to TCP port 21.
 An HTTP server for the World Wide Web will always listen to TCP port 80.
 These end points have been assigned by the Internet Assigned Numbers Authority (IANA).
 With assigned end points, the client only needs to find the network address of the machine
where the server is running.

2. Many services do not require a pre-assigned end point.


Example: A time-of-day server may use an end point that is dynamically assigned to it by its local
operating system.
 A client will need to look up the end point first.
Solution:
 A daemon running on each machine that runs servers.
 The daemon keeps track of the current end point of each service implemented by a co-located
server.
 The daemon itself listens to a well-known end point.
 A client will first contact the daemon, request the end point, and then contact the specific server
(Figure 8(a))
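The daemon's bookkeeping amounts to a small registry mapping service names to their dynamically assigned end points; a toy in-process sketch (the service names and port numbers are invented):

```python
# Toy end-point daemon: co-located servers register the port the local OS
# assigned to them; clients query the daemon (at a well-known end point) first.
registry = {}

def register(service, port):
    registry[service] = port

def lookup(service):
    return registry.get(service)   # None if the service is unknown

register("time-of-day", 52431)     # port dynamically assigned by the OS
port = lookup("time-of-day")
print(port)                        # -> 52431
print(lookup("quote-of-day"))      # -> None: no such service registered
```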

Figure 8: (a) Client-to-server binding using a daemon (b) Client-to-server binding using a super-server

 Common to associate an end point with a specific service


 Implementing each service by means of a separate server may be a waste of resources.
Example: UNIX system
 Many servers run simultaneously, with most of them passively waiting for a client request.
 Instead of having to keep track of so many passive processes, it is often more efficient to have a
single super-server listening to each end point associated with a specific service (Figure 8 (b)).
 This is the approach taken with the inetd daemon in UNIX.
 Inetd manages Internet services by listening to a number of well-known ports for these
services.
 When a request comes in, the daemon forks a process to take further care of the request.
 That process will exit after it is finished.

Design issue
State of server: A stateless server is a server that treats each request as an independent transaction that
is unrelated to any previous request. A stateless server does not keep information on the state of its
clients, and can change its own state without having to inform any client. Example: Web server is
stateless.
 It merely responds to incoming HTTP requests, which can be either for uploading a file to the
server or (most often) for fetching a file.
 When the request has been processed, the Web server forgets the client completely.
 The collection of files that a Web server manages (possibly in cooperation with a file server),
can be changed without clients having to be informed.

A stateful server remembers client data (state) from one request to the next.
 Information needs to be explicitly deleted by the server.
Example:
 A file server that allows a client to keep a local copy of a file, even for performing update
operations.
 The server maintains a table containing (client, file) entries.
 This table allows the server to keep track of which client currently has the update permissions
on which file and the most recent version of that file.
 Improves performance of read and write operations as perceived by the client.
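The (client, file) table of such a stateful server can be sketched as follows (the function and file names are invented):

```python
# Toy stateful file server: the table records which client currently
# holds update permission on which file.
table = {}

def open_for_update(client, filename):
    owner = table.get(filename)
    if owner is not None and owner != client:
        return False               # another client holds the file
    table[filename] = client
    return True

def close_file(client, filename):
    # State must be explicitly deleted by the server.
    if table.get(filename) == client:
        del table[filename]

print(open_for_update("A", "report.txt"))  # -> True
print(open_for_update("B", "report.txt"))  # -> False: A still holds it
close_file("A", "report.txt")
print(open_for_update("B", "report.txt"))  # -> True
```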

Advantages / Disadvantages
 Using a stateless file server, the client must:
 specify complete file names in each request,
 specify the location for reading or writing, and
 re-authenticate for each request.
 Using a stateful file server, the client can send less data with each request
 A stateful server is simpler
 A stateless server is more robust:
 lost connections can't leave a file in an invalid state,
 rebooting the server does not lose state information, and
 rebooting the client does not confuse a stateless server.

Server Clusters
General Organization

A server cluster is a collection of machines connected through a network, where each machine runs one
or more servers. A server cluster is logically organized into three tiers

Figure 9: Server Cluster

First Tier - Consists of a (logical) switch through which client requests are routed.
Switches vary:
 Transport-layer switches accept incoming TCP connection requests and pass requests on to one
of the servers in the cluster,
 A Web server that accepts incoming HTTP requests, but that partly passes requests to
application servers for further processing only to later collect results and return an HTTP
response.

Second Tier: Application Processing.


In cluster computing, servers run on high-performance hardware dedicated to delivering compute power.
In enterprise server clusters, applications may need to run on relatively low-end machines, as the
required compute power is not the bottleneck; access to storage is.

Third Tier: Data-Processing Servers - Notably File and Database Servers.

 These servers may be running on specialized machines, configured for high-speed disk access and
having large server-side data caches.

Issue:
When a server cluster offers multiple services, different machines may run different application servers.
 The switch will have to be able to distinguish services or otherwise it cannot forward requests to
the proper machines.
 Many second-tier machines run only a single application.
 This limitation comes from dependencies on available software and hardware, but also from the
fact that different applications are often managed by different administrators.
 Consequence - certain machines are temporarily idle, while others are receiving an overload of
requests.

Solution:
Temporarily migrate services to idle machines to balance the load. Using virtual machines allows
relatively easy migration of code to real machines.

The Switch
Design goal for server clusters: Access transparency i.e. client applications running on remote machines
should not know the internal organization of the cluster.

Implementation: A single access point employing a dedicated machine. The switch forms the entry
point for the server cluster, offering a single network address.

Standard way of accessing a server cluster: A TCP connection over which application-level requests
are sent as part of a session. A session ends by tearing down the connection. The switch accepts
incoming TCP connection requests, and hands off such connections to one of the servers.
 When the switch receives a TCP connection request, it identifies the best server for handling
that request, and forwards the request packet to that server.
 The server will send an acknowledgment back to the requesting client, inserting the switch's
IP address in the source field of the header of the IP packet carrying the TCP segment.
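A common policy for choosing the "best server" is simple round-robin; a toy sketch of the switch's forwarding decision (the server names are invented):

```python
import itertools

# Toy transport-layer switch: incoming TCP connection requests are
# forwarded to the servers in the cluster in round-robin order.
servers = ["server1", "server2", "server3"]
rotation = itertools.cycle(servers)

def forward(connection_request):
    target = next(rotation)        # pick the next server in the rotation
    return (connection_request, target)

routed = [forward("conn%d" % i) for i in range(5)]
print(routed)
```

A real switch would base the decision on measured load rather than a fixed rotation, but the single-entry-point structure is the same.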

Figure 10: The Switch is performing load balancing

Code Migration
Reasons for Migrating Code

Traditionally, code migration in distributed systems took place in the form of process migration, in
which an entire process is moved from one machine to another. The reason for doing so: overall system
performance can be improved if processes are moved from heavily loaded to lightly loaded machines.
 Load is expressed in terms of the CPU queue length or CPU utilization, but moving a running
process to a different machine is a costly and intricate task.
 In many modern distributed systems, optimizing computing capacity is less of an issue than
minimizing communication.
 Because platforms and networks are heterogeneous, decisions to improve performance through
code migration are often based on qualitative reasoning rather than mathematical models.

Examples:
A client-server system where the server manages a huge database
 If a client application needs to perform many database operations involving large quantities of data,
it may be better to ship part of the client application to the server and send only the results across
the network.
 Otherwise, the network may be swamped with the transfer of data from the server to the client. In
this case, code migration is based on the assumption that it generally makes sense to process data
close to where those data reside.

Migrating parts of the server to the client


 In many interactive database applications, clients need to fill in forms that are subsequently
translated into a series of database operations.
 Processing the form at the client side, and sending only the completed form to the server, can avoid
that a relatively large number of small messages need to cross the network.
 Result - the client perceives better performance, while at the same time the server spends less time
on form processing and communication.

Reason for doing so: Flexibility - It is possible to dynamically configure distributed systems.
Example - Client / Server application
 Traditional Implementation - server implements a standardized interface to a file system.
 Remote clients communicate with the server using a proprietary protocol.
 The client-side implementation of the file system interface needs to be linked with the client
application.
 Approach requires the software be readily available to the client at the time the client
application is being developed.

 Alternative Implementation - the server provides the client's implementation no sooner than is
necessary, i.e., when the client binds to the server.

 The client dynamically downloads the implementation, goes through the necessary initialization
steps, and invokes the server.
 This model of dynamically moving code from a remote site does require that the protocol for
downloading and initializing code is standardized.
 The downloaded code must be executed on the client's machine.

Figure 11: The principle of dynamically configuring a client to communicate to a server.

The client first fetches the necessary software, and then invokes the server.
Advantages –
 Clients need not have all the software preinstalled to talk to servers.
 The software can be moved as required and discarded when no longer needed.
 With standardized interfaces the client-server protocol and its implementation can be changed at
will.
 Changes will not affect existing client applications that rely on the server.

Disadvantages: Security
Blindly trusting that the downloaded code implements only the advertised interface is risky, especially
while that code is accessing an unprotected hard disk.

Models for Code Migration


Code migration in the broadest sense deals with moving programs between machines, with the intention
to have those programs be executed at the target. In some cases, as in process migration, the execution
status of a program, pending signals and other parts of the environment must be moved as well.
A process consists of three segments. The code segment is the part that contains the set of instructions
that make up the program that is being executed. The resource segment contains references to external
resources needed by the process, such as files, printers, devices, other processes, and so on. Finally, an
execution segment is used to store the current execution state of a process, consisting of private data,
the stack, and, of course, the program counter.

A further distinction can be made between sender-initiated and receiver-initiated migration. In
sender-initiated migration, migration is initiated at the machine where the code currently resides or is
being executed. Typically, sender-initiated migration is done when uploading programs to a compute
server. Another example is sending a query, or batch of queries, to a remote database server. In
receiver-initiated migration, the initiative for code migration is taken by the target machine. Java
applets are an example of this approach.

Receiver-initiated migration is simpler than sender-initiated migration. In many cases, code migration
occurs between a client and a server, where the client takes the initiative for migration. Securely
uploading code to a server, as is done in sender-initiated migration, often requires that the client has
previously been registered and authenticated at that server. In other words, the server is required to
know all its clients, the reason being that the client will presumably want access to the server's
resources such as its disk. Protecting such resources is essential. In contrast, downloading code as in the
receiver-initiated case can often be done anonymously. Moreover, the server is generally not interested
in the client's resources. Instead, code migration to the client is done only for improving client-side
performance. To that end, only a limited number of resources need to be protected, such as memory and
network connections.

Figure 12: Four different paradigms for code mobility

Hence, the models of code migration can be categorized into:

 Simple client-server computing
 Remote evaluation,
 Code-on-demand
 Mobile agents

Migration in Heterogeneous Systems

Example: Real-time migration of a virtualized operating system
