
UNIT-4-ii part

Virtual Memory Management


Virtual memory is a technique that allows the
execution of processes that are not completely in
memory. One major advantage of this scheme is that
programs can be larger than physical memory.

Virtual memory involves the separation of logical memory as perceived by users from physical
memory. This separation allows an extremely large
virtual memory to be provided for programmers when
only a smaller physical memory is available. The
virtual address space of a process refers to the logical
(or virtual) view of how a process is stored in memory.
Note in Figure 9.2 that we allow for the heap to grow
upward in memory as it is used for dynamic memory
allocation. Similarly, we allow for the stack to grow
downward in memory through successive function calls.
The large blank space (or hole) between the heap and
the stack is part of the virtual address space but will
require actual physical pages only if the heap or stack
grows.

Demand Paging:
To execute a program, there are two possibilities. One option is to load the complete program
into memory at execution time. However, a problem with this approach is
that we may not initially need the entire program in memory. An alternative strategy is to
load pages only as they are needed. This technique is known as demand paging and is
commonly used in virtual memory systems. With demand-paged virtual memory, pages are
loaded only when they are demanded during program execution; pages that are never
accessed are thus never loaded into physical memory.

A demand-paging system is similar to a paging system with swapping (Figure
9.4) where processes reside in secondary
memory (usually a disk). When we want to
execute a process, we swap it into memory.
Rather than swapping the entire process into
memory, however, we use a lazy swapper. A
lazy swapper never swaps a page into
memory unless that page will be needed. A
swapper manipulates entire processes,
whereas a pager is concerned with the individual pages of a process. We thus use the
term pager, rather than swapper, in connection
with demand paging.

When a process is to be swapped in, the pager guesses which pages will be used
before the process is swapped out again. Instead of swapping in a whole process, the pager
brings only those pages into memory. Thus, it avoids reading into memory pages that will not
be used anyway, decreasing the swap time and the amount of physical memory needed.
With this scheme, we need some form of hardware support to distinguish between the
pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can
be used for this purpose. When this bit is set to "valid", the associated page is both legal and in
memory. If the bit is set to "invalid", the page either is not valid (that is, not in the logical
address space of the process) or is valid but is currently on the disk. This situation is depicted
in Figure 9.5.

Access to a page marked invalid causes a page fault, which traps to the
operating system. This trap is the result of the operating system's failure to bring the desired
page into memory. The procedure for handling this page fault is straightforward (Figure 9.6), and a small simulation sketch follows the list:
1. We check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.
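The following is a minimal sketch of steps 1-6 as a Python simulation, not the actual operating-system code path; the page-table entry, free-frame list, and backing store are simplified stand-ins, and page replacement (needed when no frame is free) is not modelled.

```python
# Minimal demand-paging sketch of steps 1-6 above (a simulation, not OS code).
# PageTableEntry, backing_store, free_frames and memory are simplified stand-ins.

class PageTableEntry:
    def __init__(self):
        self.valid = False    # valid-invalid bit
        self.frame = None     # frame number once the page is resident

def handle_page_fault(page, page_table, free_frames, backing_store, memory):
    # Steps 1-2: an invalid reference outside the address space terminates the process.
    if page not in page_table:
        raise MemoryError(f"invalid reference to page {page}: terminate process")
    entry = page_table[page]
    # Step 3: take a free frame from the free-frame list (replacement not modelled).
    frame = free_frames.pop()
    # Step 4: "schedule a disk read" -- here we simply copy from the backing store.
    memory[frame] = backing_store[page]
    # Step 5: update the page table to show that the page is now in memory.
    entry.frame = frame
    entry.valid = True
    # Step 6: the hardware would now restart the instruction that caused the trap.
    return frame
```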
Page Replacement Algorithms:
Page replacement takes the following approach. If no frame is free, we find one that is not
currently being used and free it. We can now use the freed frame to hold the page for which
the process faulted. (Figure 9.10)

1. Find the location of the desired page on the disk.


2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Restart the user process.

We next illustrate several page-replacement algorithms. In doing so, we use the reference
string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a memory with three frames.


FIFO Page Replacement Algorithm:
The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A
FIFO replacement algorithm associates with each page the time when that page was brought
into memory. When a page must be replaced, the oldest page is chosen. We can create a
FIFO queue to hold all pages in memory. We replace the page at the head of the queue. When
a page is brought into memory, we insert it at the tail of the queue.
For our example reference string, our three frames are initially empty. The first three
references (7, 0, 1) cause page faults and are brought into these empty frames. The next
reference (2) replaces page 7, because page 7 was brought in first. Since 0 is the next
reference and 0 is already in memory, we have no fault for this reference. The first reference
to 3 results in replacement of page 0, since it is now first in line. This process continues as
shown in Figure
9.12. Every time a fault occurs, we show which pages are in our three frames. There are
fifteen faults altogether.

The FIFO page-replacement algorithm is easy to understand and to program; however, its
performance is not always good. To illustrate the problems that are possible with a FIFO
page-replacement algorithm, we consider the following reference string:

1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Notice that the number of faults for four frames (ten) is greater than the number of
faults for three frames (nine)! This most unexpected result is known as Belady's anomaly:
the page-fault rate may increase as the number of allocated frames increases.
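A short simulation (a sketch for checking fault counts, not code from the text) makes the anomaly easy to verify: the reference string above produces nine faults with three frames but ten with four.

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = set()     # pages currently in memory
    queue = deque()    # arrival order; oldest page at the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                      # hit: no fault
        faults += 1
        if len(frames) == num_frames:
            victim = queue.popleft()      # evict the oldest page
            frames.remove(victim)
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- Belady's anomaly
```

Running the same function on the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames gives the fifteen faults of Figure 9.12.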

Optimal Page Replacement Algorithm:


One result of the discovery of Belady's anomaly was the search for an optimal page-replacement
algorithm, which has the lowest page-fault rate of all algorithms and will never suffer from
Belady's anomaly. Such an algorithm does exist and has been called OPT or MIN. It is simply this:

Replace the page that will not be used for the longest period of time.

Use of this page-replacement algorithm guarantees the lowest possible page fault rate
for a fixed number of frames.

For example, on our sample reference string, the optimal page-replacement algorithm
would yield nine page faults, as shown in Figure 9.14. The first three references cause faults
that fill the three empty frames. The reference to page 2 replaces page 7, because page 7 will
not be used until reference 18, whereas page 0 will be used at 5, and page 1 at 14.
Unfortunately, the optimal page-replacement algorithm is difficult to implement, because it
requires future
knowledge of the reference string. As a result, the optimal algorithm is used mainly for
comparison studies.
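Although OPT cannot be implemented online, it is easy to simulate offline once the whole reference string is known. The sketch below is an illustration only: it evicts the resident page whose next use lies farthest in the future (or that is never used again).

```python
def opt_faults(reference_string, num_frames):
    """Count page faults under optimal (OPT/MIN) replacement."""
    frames = set()
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # Evict the page whose next use is farthest away (or never occurs).
            future = reference_string[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float("inf"))
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9 faults, matching Figure 9.14
```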

LRU Page Replacement:


If we use the recent past as an approximation of the near future, then we can replace the
page that has not been used for the longest period of time. This approach is the least-recently-used (LRU) algorithm. LRU replacement associates with each page the time of that
page's last use. When a page must be replaced, LRU chooses the page that has not been used
for the longest period of time.

The result of applying LRU replacement to our example reference string is shown in
Figure 9.15. The LRU algorithm produces twelve faults. Notice that the first five faults are
the same as those for optimal replacement. When the reference to page 4 occurs, however,
LRU replacement sees that, of the three frames in memory, page 2 was used least recently.
Thus, the LRU algorithm replaces page 2, not knowing that page 2 is about to be used.
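A sketch of LRU using an ordered dictionary to track recency of use (again an illustration, not the hardware-assisted implementations discussed in the text):

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    recency = OrderedDict()   # resident pages; most recently used at the end
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.move_to_end(page)     # refresh recency on a hit
            continue
        faults += 1
        if len(recency) == num_frames:
            recency.popitem(last=False)   # evict the least recently used page
        recency[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults, matching Figure 9.15
```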

Allocation of Frames:
How do we allocate the fixed amount of free memory among the various processes? If
we have 93 free frames and two processes, how many frames does each process get?

The simplest case is the single-user system. Consider a single-user system with 128
KB of memory composed of pages 1 KB in size. This system has 128 frames. The operating
system may take 35 KB, leaving 93 frames for the user process.
Minimum Number of Frames

Frame allocation is constrained in various ways. We cannot allocate more than
the total number of available frames, and we must also allocate at least a minimum number of frames.
One reason for allocating at least a minimum number of frames involves performance.
Obviously, as the number of frames allocated to each process decreases, the page-fault rate
increases, slowing process execution.

The minimum number of frames is defined by the computer architecture. Consider, for
example, the IBM 370 MVC instruction: since the instruction moves data from one storage
location to another, it takes 6 bytes and can itself straddle two pages, and each of its two
operands may also straddle a page boundary, so as many as six frames may be needed to
complete a single instruction.

The minimum number of frames per process is defined by the architecture, the
maximum number is defined by the amount of available physical memory. In between, we
are still left with significant choice in frame allocation.

Allocation Algorithms

The easiest way to split m frames among n processes is to give everyone an equal
share, m/n frames. For instance, if there are 93 frames and five processes, each process will
get 18 frames. The three leftover frames can be used as a free-frame buffer pool. This scheme
is called equal allocation.

An alternative is to recognize that various processes will need differing amounts of
memory. Consider a system with a 1-KB frame size. If a small student process of 10 KB and
an interactive database of 127 KB are the only two processes running in a system with 62 free
frames, it does not make much sense to give each process 31 frames. The student process
does not need more than 10 frames, so the other 21 are, strictly speaking, wasted.

To solve this problem, we can use proportional allocation, in which we allocate
available memory to each process according to its size. Let the size of the virtual memory for
process pi be si, and define

S = Σ si.

Then, if the total number of available frames is m, we allocate ai frames to process pi,
where ai is approximately

ai = (si / S) × m.

With proportional allocation, we would split 62 frames between two processes, one of 10 pages
and one of 127 pages, by allocating 4 frames and 57 frames, respectively, since

10/137 × 62 ≈ 4, and

127/137 × 62 ≈ 57.
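The arithmetic above can be sketched in a few lines; the truncating (floor) rounding used here is an assumption, and any frames lost to rounding would go back to the free pool.

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames to processes in proportion to their virtual-memory sizes."""
    S = sum(sizes)                                   # S = sum of s_i
    return [s * total_frames // S for s in sizes]    # a_i ~ (s_i / S) * m

print(proportional_allocation([10, 127], 62))   # [4, 57]
```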

Global versus Local Allocation


Global replacement allows a process to select a replacement frame from the set of all
frames, even if that frame is currently allocated to some other process; that is, one process
can take a frame from another. Local replacement requires that each process select from only
its own set of allocated frames.
Thrashing:
If a process does not have the number of frames it needs to support the pages in
active use, it will quickly page-fault. At this point, it must replace some page. However, since
all its pages are in active use, it must replace a page that will be needed again right away.
Consequently, it quickly faults again, and again, and again, replacing pages that it must bring
back in immediately.

This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing.

Cause of Thrashing

The operating system monitors CPU utilization. If CPU utilization is too low, we
increase the degree of multiprogramming by introducing a new process to the system. A
global page-replacement algorithm is used; it replaces pages without regard to the process to
which they belong. Now suppose that a process enters a new phase in its execution and needs
more frames. It starts faulting and taking frames away from other processes. These processes
need those pages, however, and so they also fault, taking frames from other processes. These
faulting processes must use the paging device to swap pages in and out. As they queue up for
the paging device, the ready queue empties. As processes wait for the paging device, CPU
utilization decreases.

The CPU scheduler sees the decreasing CPU utilization and increases the degree of
multiprogramming as a result. The new process tries to get started by taking frames from
running processes, causing more page faults and a longer queue for the paging device. As a
result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree
of multiprogramming even more. Thrashing has occurred, and system throughput plunges.
This phenomenon is illustrated in Figure 9.18, in which CPU utilization is plotted against the
degree of multiprogramming.

Working-Set Model
This model uses a parameter, Δ, to define the working-set window. The idea is to examine the
most recent Δ page references. The set of pages in the most recent Δ page references is the
working set (Figure 9.20). For example, given the sequence of memory references shown in
Figure 9.20, if Δ = 10 memory references, then the working set at time t1 is {1, 2, 5, 6, 7}. By
time t2, the working set has changed to {3, 4}.
The most important property of the working set, then, is its size. If we compute the
working-set size, WSSi, for each process in the system, we can then consider that

D = Σ WSSi,

where D is the total demand for frames. Each process is actively using the pages in its
working set. Thus, process i needs WSSi frames. If the total demand is greater than the total
number of available frames (D > m), thrashing will occur, because some processes will not
have enough frames.
Once Δ has been selected, use of the working-set model is simple. The operating
system monitors the working set of each process and allocates to that working set enough
frames to provide it with its working-set size. If there are enough extra frames, another
process can be initiated. If the sum of the working-set sizes increases, exceeding the total
number of available frames, the operating system selects a process to suspend. The process's
pages are written out (swapped), and its frames are reallocated to other processes. The
suspended process can be restarted later.
This working-set strategy prevents thrashing while keeping the degree of multiprogramming
as high as possible. Thus, it optimizes CPU utilization.
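A sketch of the working-set bookkeeping described above: each working set is the set of distinct pages in a process's most recent Δ references, and thrashing is predicted when the total demand D = Σ WSSi exceeds the available frames m. The function names and the ten-reference trace are assumptions for illustration; the trace is chosen so that its working set matches the one quoted from Figure 9.20.

```python
def working_set(references, delta):
    """Pages referenced in the most recent delta references (the working set)."""
    return set(references[-delta:])

def thrashing_likely(per_process_references, delta, available_frames):
    """True if the total demand D = sum of WSS_i exceeds the available frames m."""
    D = sum(len(working_set(refs, delta)) for refs in per_process_references)
    return D > available_frames

refs_t1 = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1]             # assumed trace ending at time t1
print(working_set(refs_t1, 10))                      # {1, 2, 5, 6, 7}
print(thrashing_likely([refs_t1, refs_t1], 10, 8))   # True: D = 10 > m = 8
```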

Page-Fault Frequency
The specific problem is how to prevent thrashing. A thrashing process has a high page-fault rate. Thus,
we want to control the page-fault rate. When it is too high, we know that the process needs
more frames. Conversely, if the page-fault rate is too low, then the process may have too
many frames. We can establish upper and lower bounds on the desired page-fault rate (Figure
9.21). If the actual page-fault rate exceeds the upper limit, we allocate the process another
frame; if the page-fault rate falls below the lower limit, we remove a frame from the process.
Thus, we can directly measure and control the page-fault rate to prevent thrashing.
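A minimal sketch of this control policy; the bounds, and how the per-process fault rate is measured, are placeholders rather than values from the text.

```python
def adjust_frames(fault_rate, frames, lower_bound, upper_bound):
    """Adjust a process's frame allocation from its measured page-fault rate."""
    if fault_rate > upper_bound:
        return frames + 1    # faulting too often: give the process another frame
    if fault_rate < lower_bound and frames > 1:
        return frames - 1    # faulting rarely: take a frame away
    return frames            # within the desired band: leave the allocation alone
```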
Unit – 4 –iii part
Mass Storage Structure
Overview Of Mass Storage Structure:
This part gives a general overview of the physical structure of secondary and tertiary storage devices.

Magnetic Disks:
Magnetic Disks provide the bulk of secondary storage for modern computer systems.
Conceptually, disks are relatively simple (Figure 12.1). Each disk platter has a flat circular shape, like a
CD. The two surfaces of a platter are covered with a magnetic material. We store information by
recording it magnetically on the platters.

A read-write head "flies" just above each surface of every platter. The heads are attached to a
disk arm that moves all the heads as a unit. The surface of a platter is logically divided into circular
tracks, which are subdivided into sectors. The set of tracks that are at one arm position makes up a
cylinder. There may be thousands of concentric cylinders in a disk drive, and each track may contain
hundreds of sectors. The storage capacity of common disk drives is measured in gigabytes. The time to
move the disk arm to the desired cylinder is called the seek time, and the time necessary for the desired
sector to rotate to the disk head is called the rotational latency.

Magnetic Tapes
Magnetic tape was used as an early secondary-storage medium. Although it is relatively
permanent and can hold large quantities of data its access time is slow compared with that of main
memory and magnetic disk. In addition, random access to magnetic tape is about a thousand times slower
than random access to magnetic disk, so tapes are not very useful for secondary storage. Tapes are used
mainly for backup, for storage of infrequently used information, and as a medium for transferring
information from one system to another.

A tape is kept in a spool and is wound or rewound past a read-write head. Moving to the correct
spot on a tape can take minutes, but once positioned, tape drives can write data at speeds comparable to
disk drives.
Disk Structure:
Modern disk drives are addressed as large one-dimensional arrays of logical blocks, where the
logical block is the smallest unit of transfer. The size of a logical block is usually 512 bytes. The one-
dimensional array of logical blocks is mapped onto the sectors of the disk sequentially.

Sector 0 is the first sector of the first track on the outermost cylinder. The mapping proceeds in
order through that track, then through the rest of the tracks in that cylinder, and then through the rest of
the cylinders from outermost to innermost. On media that use constant linear velocity (CLV), the density
of bits per track is uniform, so tracks farther from the center can hold more data; this method is used in
CD-ROM and DVD-ROM drives. Alternatively, the disk rotation speed can stay constant, and the density of
bits decreases from inner tracks to outer tracks to keep the data rate constant; this method is used in hard
disks and is known as constant angular velocity (CAV).

Disk Attachment:
Computers access disk storage in two ways. One way is via I/O ports (or host attached storage).
The other way is via a remote host in a distributed file system (network attached storage).

Host-Attached Storage
Host-attached storage is storage accessed through local I/O ports. These ports use several
technologies. The typical desktop PC uses an I/O bus architecture called IDE or ATA. This architecture
supports a maximum of two drives per I/O bus. A newer, similar protocol that has simplified cabling is
SATA. High-end workstations and servers generally use more sophisticated I/O architectures, such as
SCSI and Fibre Channel (FC).

SCSI is a bus architecture. Its physical medium is usually a ribbon cable with a large number of
conductors. FC is a high-speed serial architecture that can operate over optical fiber or over a four-
conductor copper cable.

Network-Attached Storage
A network-attached storage (NAS) device is a special-purpose storage system that is accessed
remotely over a data network (Figure 12.2). Clients access network-attached storage via a remote-
procedure-call interface. The remote procedure calls (RPCs) are carried via TCP or UDP over an IP
network, usually the same local-area network (LAN) that carries all data traffic to the clients.

Storage-Area Network
One drawback of network-attached storage systems is that the storage I/O operations consume
bandwidth on the data network, thereby increasing the latency of network communication. A storage-area
network (SAN) is a private network (using storage protocols rather than networking protocols) connecting
servers and storage units, as shown in Figure 12.3. The power of a SAN lies in its flexibility. Multiple
hosts and multiple storage arrays can attach to the same SAN, and storage can be dynamically allocated to
hosts. A SAN switch allows or prohibits access between the hosts and the storage.
Disk Scheduling:
One of the responsibilities of the operating system is to use the hardware efficiently.
The access time has two major components: seek time and rotational latency.

FCFS Scheduling
The simplest form of disk scheduling is, of course, the first-come, first-served
(FCFS) algorithm. This algorithm is intrinsically fair, but it generally does not provide the
fastest service. Consider, for example, a disk queue with requests for I/O to blocks on
cylinders

98, 183, 37, 122, 14, 124, 65, 67

in that order.

Figure 12.4 FCFS Disk Scheduling

If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to
183, 37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. This
schedule is diagrammed in Figure 12.4. The wild swing from 122 to 14 and then back to 124
illustrates the problem with this schedule.
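The total of 640 cylinders can be checked with a short sketch (the queue and the starting head position are taken from the example above):

```python
def total_head_movement(start, queue):
    """Total cylinders moved when requests are serviced in FCFS order."""
    movement, position = 0, start
    for cylinder in queue:
        movement += abs(cylinder - position)
        position = cylinder
    return movement

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(total_head_movement(53, queue))   # 640 cylinders
```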
SSTF Scheduling

The SSTF (Shortest-Seek-Time-First) algorithm selects the request with the least
seek time from the current head position.

Figure 12.5 SSTF Disk Scheduling


For our example request queue, the closest request to the initial head position (53) is
at cylinder 65. Once we are at cylinder 65, the next closest request is at cylinder 67. From
there, the request at cylinder 37 is closer than the one at 98, so 37 is served next. Continuing,
we service the request at cylinder 14, then 98, 122, 124, and finally 183 (Figure 12.5). This
scheduling method results in a total head movement of only 236 cylinders, little more than
one-third of the distance needed for FCFS scheduling of this request queue.

SSTF scheduling is essentially a form of shortest-job-first (SJF) scheduling; and like


SJF scheduling, it may cause starvation of some requests. Suppose that requests for cylinders 14
and 186 are in the queue; a continual stream of new requests near cylinder 14 could cause the
request for cylinder 186 to wait indefinitely.
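A greedy sketch of SSTF for the same queue; it simply picks the pending request closest to the current head position at each step.

```python
def sstf_order(start, queue):
    """Service order when the closest pending request is always chosen next."""
    pending, position, order = list(queue), start, []
    while pending:
        nearest = min(pending, key=lambda c: abs(c - position))
        pending.remove(nearest)
        order.append(nearest)
        position = nearest
    return order

order = sstf_order(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)                                                 # [65, 67, 37, 14, 98, 122, 124, 183]
print(sum(abs(b - a) for a, b in zip([53] + order, order)))  # 236 cylinders
```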

SCAN Scheduling
In the SCAN algorithm, the disk arm starts at one end of the disk and moves towards
the other end, servicing requests as it reaches each cylinder, until it gets to the other end of
the disk. At the other end, the direction of head movement is reversed, and servicing
continues. The SCAN algorithm is also called the elevator algorithm.
Figure 12.6 SCAN Disk Scheduling

Assuming that the disk arm is moving toward 0 and that the initial head position is
again 53, the head will next service 37 and then 14. At cylinder 0, the arm will reverse and
will move toward the other end of the disk, servicing the requests at 65, 67, 98, 122, 124, and
183 (Figure 12.6). If a request arrives in the queue just in front of the head, it will be
serviced almost immediately; a request arriving just behind the head will have to wait until
the arm moves to the end of the disk, reverses direction, and comes back.
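A sketch of the SCAN service order with the arm initially moving toward cylinder 0, as in the example; the travel to cylinder 0 before the arm reverses is noted in a comment.

```python
def scan_order(start, queue, direction="down"):
    """SCAN: sweep toward one end of the disk, then reverse direction."""
    lower = sorted((c for c in queue if c < start), reverse=True)  # below the head
    upper = sorted(c for c in queue if c >= start)                 # at or above it
    return lower + upper if direction == "down" else upper + lower

print(scan_order(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [37, 14, 65, 67, 98, 122, 124, 183]
# The arm also travels down to cylinder 0 before reversing, so the total
# head movement here is 53 + 183 = 236 cylinders.
```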

C-SCAN Scheduling
Circular SCAN Scheduling is a variant of SCAN designed to provide a more uniform wait
time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing
requests along the way. When the head reaches the other end, however, it immediately
returns to the beginning of the disk without servicing any requests on the return trip (Figure
12.7).

Figure 12.7 C-SCAN Disk Scheduling
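A sketch of the C-SCAN service order, assuming the head first moves toward the high-numbered end as in Figure 12.7 and then jumps back to cylinder 0:

```python
def c_scan_order(start, queue):
    """C-SCAN: sweep toward the last cylinder, then jump back to cylinder 0."""
    upper = sorted(c for c in queue if c >= start)   # serviced on the way up
    lower = sorted(c for c in queue if c < start)    # serviced after the jump
    return upper + lower

print(c_scan_order(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [65, 67, 98, 122, 124, 183, 14, 37]
```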

LOOK Scheduling
As described, both SCAN and C-SCAN move the disk arm across the full width of the disk. In practice,
neither algorithm is often implemented this way. More commonly, the arm goes only as far
as the final request in each direction. Then, it reverses direction immediately, without going
all the way to the end of the disk. Versions of SCAN and C-SCAN that follow this pattern are called
LOOK and C-LOOK scheduling, respectively.
Figure 12.8 C-LOOK Disk Scheduling
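As a sketch of the difference this makes, the total arm travel under C-LOOK for the example queue can be computed as below; the function assumes at least one pending request at or above the starting head position.

```python
def c_look_movement(start, queue):
    """Total head movement under C-LOOK: the arm reverses at the last request."""
    upper = sorted(c for c in queue if c >= start)   # assumed non-empty here
    lower = sorted(c for c in queue if c < start)
    movement = upper[-1] - start                     # sweep up to the farthest request
    if lower:
        movement += upper[-1] - lower[0]             # jump back to the lowest request
        movement += lower[-1] - lower[0]             # finish the short upward sweep
    return movement

print(c_look_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 322 cylinders
```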
