

UNIT 3

Memory Management

MAIN MEMORY MANAGEMENT
 To execute, a program must be resident in main memory.

 Motivation: the need to manage main memory comes from the fact that several memory-resident programs must be supported at once, as in a multiprogramming environment.
MAIN MEMORY MANAGEMENT
Issues that prompt main memory management:

 Allocation: processes must be allocated space in the main memory.

 Swapping, fragmentation and compaction: when a program terminates or is swapped out, it leaves a hole in the main memory. Main memory becomes fragmented and needs to be compacted for orderly allocation.
MAIN MEMORY MANAGEMENT
 Garbage collection: the area released by a process is usually not reclaimed immediately – it is garbage! Compaction and garbage collection are the responsibility of the OS.

 Protection: checking for illegal access to data in another process's memory area.
MAIN MEMORY MANAGEMENT
 Virtual memory: VM support requires an address translation mechanism to map a logical address to the physical address of the desired data or instruction.

 I/O support: most block-oriented devices are presented as specialized files. Their buffers need to be managed within main memory alongside other processes.
BINDING OF INSTRUCTIONS AND DATA TO MEMORY

 Address binding of instructions and data to memory addresses can happen at three different stages:
 Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
 Load time: relocatable code must be generated if the memory location is not known at compile time.
 Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This requires hardware support for address maps (e.g., base and limit registers).
MULTISTEP PROCESSING OF A USER PROGRAM

LOGICAL VS. PHYSICAL ADDRESS SPACE
 The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
 Logical address – generated by the CPU; also referred to as a virtual address.
 Physical address – the address seen by the memory unit.
 Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
MEMORY-MANAGEMENT UNIT (MMU)

 A hardware device that maps virtual addresses to physical addresses.

 In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.

 The user program deals with logical addresses; it never sees the real physical addresses.
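To make this concrete, here is a minimal sketch (in Python, for illustration) of what the relocation-register hardware does on every memory reference; the register values below are assumptions, not values from the slides:

    # Illustrative sketch of relocation-register address translation.
    # RELOCATION and LIMIT are assumed example values.
    RELOCATION = 14000   # smallest physical address allotted to the process
    LIMIT = 3000         # size of the process's logical address space

    def translate(logical_addr):
        # Hardware limit check, then relocation on every generated address.
        if logical_addr >= LIMIT:
            raise MemoryError("trap: addressing error beyond limit")
        return RELOCATION + logical_addr

    print(translate(100))   # -> 14100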
COMPILER GENERATED BINDINGS

 The advantage of relocation can be seen in the light of the binding of addresses to variables in a program.

 For a variable x in a program P, a fixed address allocation for x means that P can run only when x is allocated the same memory location again.
COMPILER GENERATED BINDINGS
 The fixed address allocated by the compiler for the variable x is known as a binding.

 If x is bound to a fixed location, then we can execute the program P only when x is placed at that allocated memory location.

 Otherwise all address references to x would be incorrect.

 If, however, the variable can be assigned a location relative to an assumed origin, then the program's origin can be relocated anywhere in main memory and we can still generate a proper relative address for x and execute the program.

 Compilers therefore generate relocatable code.


LINKING AND LOADING CONCEPTS
 Following the creation of a high-level language (HLL) source program, there are three stages of processing before we get a process, described below.
LINKING AND LOADING CONCEPTS
 Stage 1: the HLL source program is compiled and object code is produced. Depending upon the program, this object code may by itself be sufficient to generate a relocatable process. However, many programs are compiled in parts, so this object code may have to be linked with other object modules. At this stage the compiler may also insert stubs at points where run-time library modules will later be linked.

 Stage 2: all object modules that have sufficient linking information (generated by the compiler) for static linking are taken up for linking. The linkage editor generates relocatable code. At this stage, however, we still do not replace the stubs placed by the compiler for run-time library linkage.
LINKING AND LOADING CONCEPTS
 Stage 3: the final step is to substitute the stubs with run-time library code, which is itself relocatable.

 When all three stages are complete we have an executable. When this executable is resident in main memory, it is a runnable process.
MEMORY RELOCATION CONCEPT
Why do we need relocatable processes?
 Consider a linear (1-D) map of main memory.
 The program counter is set to the absolute address of the first instruction of the program.
 Data can also be fetched if we know its absolute address.
 If that part of memory is currently in use, the program cannot be run.
 Ideally, given the program's instructions, we should be able to execute the program starting at any location.
DYNAMIC RELOCATION USING A RELOCATION
REGISTER

MEMORY RELOCATION CONCEPT

 With this flexibility, we can load the process into any free area of main memory.

 This is most useful when processes move in and out of main memory: the hole created when a process moves out of main memory may not be available when it is brought back in.
DYNAMIC LOADING

 A routine is not loaded until it is called.
 Better memory-space utilization; an unused routine is never loaded.
 Useful when large amounts of code are needed to handle infrequently occurring cases.
 No special support from the operating system is required; dynamic loading is implemented through program design.
DYNAMIC LINKING
 Linking is postponed until execution time.
 A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
 The stub replaces itself with the address of the routine, and executes the routine.
 The operating system is needed to check whether the routine is in the process's memory space.
 Dynamic linking is particularly useful for libraries.
SWAPPING
 A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
 Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
 Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
 The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
 Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
SCHEMATIC VIEW OF SWAPPING

CONTIGUOUS ALLOCATION
 Main memory is usually divided into two partitions:
 the resident operating system, usually held in low memory with the interrupt vector;
 user processes, held in high memory.

 Single-partition allocation
 A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data.
 The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register.
A BASE AND A LIMIT REGISTER DEFINE A LOGICAL
ADDRESS SPACE

HW ADDRESS PROTECTION WITH BASE AND LIMIT
REGISTERS

CONTIGUOUS ALLOCATION
 Multiple-partition allocation
 Hole – a block of available memory; holes of various sizes are scattered throughout memory.
 When a process arrives, it is allocated memory from a hole large enough to accommodate it.
 The operating system maintains information about:
a) allocated partitions b) free partitions (holes)
DYNAMIC STORAGE-ALLOCATION PROBLEM
 How to satisfy a request of size n from a list of free holes:

 First-fit: allocate the first hole that is big enough.
 Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
 Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

 First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
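As a concrete illustration, here is a small Python sketch of the three placement policies over a free-hole list; the hole list and request size are assumed values, not data from the slides:

    # Sketch of first-, best- and worst-fit over a list of (start, size) holes.
    def first_fit(holes, n):
        # First hole large enough, scanning from one end.
        return next((h for h in holes if h[1] >= n), None)

    def best_fit(holes, n):
        # Smallest hole that still fits; needs a full scan.
        fits = [h for h in holes if h[1] >= n]
        return min(fits, key=lambda h: h[1], default=None)

    def worst_fit(holes, n):
        # Largest hole; also needs a full scan.
        fits = [h for h in holes if h[1] >= n]
        return max(fits, key=lambda h: h[1], default=None)

    holes = [(0, 5), (8, 12), (25, 7)]   # assumed free list
    print(first_fit(holes, 6), best_fit(holes, 6), worst_fit(holes, 6))
    # -> (8, 12) (25, 7) (8, 12)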
FRAGMENTATION
 External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous.
 Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
 External fragmentation can be reduced by compaction:
 shuffle the memory contents to place all free memory together in one large block;
 compaction is possible only if relocation is dynamic, and it is done at execution time.
 The I/O problem:
 latch the job in memory while it is involved in I/O, or
 do I/O only into OS buffers.
THE FIRST FIT POLICY OF MEMORY ALLOCATION

 We follow FCFS (process management) and First Fit (memory allocation) policies.

 The First Fit main-memory allocation policy is very easy to implement and fast in execution.

 However, the First Fit policy may leave many small holes.
MEMORY ALLOCATION POLICIES
 The Best Fit policy scans all available holes and chooses the one with a size closest to the requirement.

 It requires a scan of the whole memory and is slow.

 Next Fit keeps a search pointer that continues from where the previous search ended.

 The Worst Fit method allocates the largest hole.

 First Fit and Next Fit are the fastest and are hence the preferred methods.

 Worst Fit is the poorest of the four methods.

 To compare these policies, we shall next examine the effect of using them on a given set of data.
THE GIVEN DATA FOR POLICY COMPARISON
 The given data:
• Memory available: 20 units
• OS resides in: 6 units
• User processes share: 14 units
 The user process data: (table omitted)
FCFS POLICY

 Statement: "Jobs are processed in the order they arrive."
FCFS MEMORY ALLOCATION

THE FIRST FIT POLICY
 Statement: "Jobs are always processed from one end; allocation finds the first block of free space large enough to accommodate the incoming process."
 Given data: (table omitted)
THE FIRST FIT POLICY OF MEMORY ALLOCATION

THE BEST FIT POLICY
 Statement: "After scanning the main memory for all available holes, the hole whose size is closest to the requirement of the incoming process is selected for allocation."
 Given data: (table omitted)
THE BEST FIT POLICY OF MEMORY ALLOCATION

FIXED AND VARIABLE PARTITION
 Fixed partition: memory is divided into chunks of the same size, for example 4K, 8K, or 16K bytes.
 Allocation: if a single chunk can hold the program/data, one chunk is allocated. If a chunk cannot hold the program/data, multiple chunks are allocated to accommodate it.
FIXED AND VARIABLE PARTITION
 Variable partition: memory is divided into chunks of various sizes. For example, some chunks may be 8K, some 16K, or even more.
 Allocation: the program/data are allocated to a chunk that can accommodate them.
BUDDY SYSTEM
 The buddy system of partitioning relies on the fact that space allocations can be conveniently handled in sizes that are powers of 2.

 There are two ways in which the buddy system allocates space:
- If we have a hole whose size is the closest power of two, that hole is used for the allocation.
- Otherwise, we look for a hole of the next power-of-2 size, split it into two equal halves, and allocate one of them.

 Because we always split a hole into two equal halves, the two halves are "buddies"; hence the name buddy system.
BUDDY SYSTEM

We assume that initially we have a space of 1024 K, and that processes arrive and are allocated in a time sequence.

We assume the requirements (P1: 80K); (P2: 312K); (P3: 164K); (P4: 38K). These processes arrive in the order of their index, and P1 and P3 finish at the same time.
BUDDY SYSTEM
 With 1024 K (1 M) of storage space, we split it into buddies of 512 K, splitting one of those into two 256 K buddies, and so on, until we get the right size. We also assume a scan of memory from the beginning, always using the first hole that accommodates the process.

 Otherwise, we split the next-sized hole into buddies. Note that the buddy system begins its search for a hole as if we had a number of holes of variable sizes; in fact, it turns into a dynamic partitioning scheme if we do not find a best-fit hole initially.

 The buddy system has the advantage that it minimizes internal fragmentation.

 It is not popular, however, because it is very slow.
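The splitting step can be sketched in a few lines of Python. The bookkeeping is simplified and the 1024 K starting block follows the example above, but this is an illustration under those assumptions, not an actual allocator:

    # Minimal sketch of buddy allocation: round the request up to the next
    # power of two, then split larger free blocks into equal halves.
    def next_power_of_two(n):
        p = 1
        while p < n:
            p *= 2
        return p

    def buddy_allocate(free_blocks, request):
        # free_blocks: sizes (in K) of free blocks; returns allocated size.
        size = next_power_of_two(request)
        candidates = [b for b in free_blocks if b >= size]
        if not candidates:
            return None                  # no block large enough
        block = min(candidates)
        free_blocks.remove(block)
        while block > size:              # split into two equal "buddies"
            block //= 2
            free_blocks.append(block)    # one buddy stays free
        return block

    free = [1024]                                    # initial 1024 K space
    print(buddy_allocate(free, 80), sorted(free))    # -> 128 [128, 256, 512]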
PAGING
 The logical address space of a process can be noncontiguous; the process is allocated physical memory wherever the latter is available.
 Physical memory is divided into fixed-sized blocks called frames (the size is a power of 2, typically between 512 bytes and 8192 bytes).
 Logical memory is divided into blocks of the same size called pages.
 The OS keeps track of all free frames.
 To run a program of size n pages, we need to find n free frames and load the program.
 A page table is set up to translate logical to physical addresses.
 Paging suffers from internal fragmentation.
ADDRESS TRANSLATION SCHEME
 The address generated by the CPU is divided into:

 Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory.

 Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
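The split is a simple bit operation when the page size is a power of two. Here is an illustrative Python sketch; the 4 KB page size and the toy page table are assumptions for the example:

    # Sketch of splitting a logical address into page number and offset,
    # then combining the frame base with the offset.
    PAGE_SIZE = 4096                 # assumed 4 KB pages
    OFFSET_BITS = 12                 # log2(4096)

    page_table = {0: 5, 1: 2, 2: 7}  # toy mapping: page -> frame

    def to_physical(logical_addr):
        p = logical_addr >> OFFSET_BITS        # page number
        d = logical_addr & (PAGE_SIZE - 1)     # offset within the page
        frame = page_table[p]                  # page-table lookup
        return (frame << OFFSET_BITS) | d      # frame base + offset

    print(to_physical(5000))   # page 1, offset 904 -> 2*4096 + 904 = 9096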
ADDRESS TRANSLATION ARCHITECTURE

PAGING EXAMPLE

FREE FRAMES

IMPLEMENTATION OF PAGE TABLE
 The page table is kept in main memory.
 The page-table base register (PTBR) points to the page table.
 The page-table length register (PTLR) indicates the size of the page table.
 In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
 The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).
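The benefit of a TLB is usually quantified as an effective access time (EAT). A short worked example, with timing numbers and hit ratio assumed purely for illustration:

    # Effective access time with a TLB.
    # A hit costs one memory access after the TLB lookup; a miss costs two
    # (page table + data). All numbers below are assumed.
    TLB_LOOKUP = 20      # ns
    MEM_ACCESS = 100     # ns
    HIT_RATIO = 0.80

    eat = HIT_RATIO * (TLB_LOOKUP + MEM_ACCESS) \
        + (1 - HIT_RATIO) * (TLB_LOOKUP + 2 * MEM_ACCESS)
    print(eat)   # -> 140.0 ns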
PAGING HARDWARE WITH TLB

MEMORY PROTECTION
 Memory protection is implemented by associating a protection bit with each frame.

 A valid-invalid bit is attached to each entry in the page table:
 "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page;
 "invalid" indicates that the page is not in the process's logical address space.
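A sketch of how the valid-invalid bit gates a page-table lookup; the table contents here are assumed for the example:

    # Page-table entries carrying a valid bit; an invalid entry traps.
    page_table = [
        {"frame": 2, "valid": True},
        {"frame": 3, "valid": True},
        {"frame": None, "valid": False},   # page outside the address space
    ]

    def frame_for(page):
        entry = page_table[page]
        if not entry["valid"]:
            raise MemoryError("trap: invalid page reference")
        return entry["frame"]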
VALID (V) OR INVALID (I) BIT IN A PAGE TABLE

CONCEPT OF VIRTUAL STORAGE
 The directly addressable main memory is limited and quite small in comparison to the logically addressable space.

 The actual size of main memory is referred to as the physical memory. The logically addressable space is referred to as virtual memory.

 The concept of virtual storage is to give the impression of a large addressable storage space without necessarily having a large primary memory.

 The basic idea is to offer a seamless extension of primary memory into space within the secondary memory. The address registers generate addresses for a space much larger than the primary memory.

 The notion of virtual memory is a bit of an illusion. The OS supports and makes this illusion possible.
VIRTUAL STORAGE
 The OS creates this illusion by copying chunks of disk memory into main memory. In other words, the processor is fooled into believing that it is accessing a large addressable space; hence the name virtual storage space. The disk area may map to the virtual space requirements and even beyond.
VIRTUAL MEMORY : PAGING
 Once we have addressable segments in the secondary memory, we need to bring them into main memory for the process to access them physically. Often the mechanism of paging helps.

 Paging is like reading a book: at any time we do not need all the pages, only the ones we are reading. The analogy suggests that the pages we are reading should be in main memory while the rest can stay in secondary memory.
VIRTUAL MEMORY : PAGING
 The primary idea is to always keep in focus the area of memory from which instructions are executed. Once that area is identified, it is loaded into primary memory in fixed-size pages.

 To enable such loading, page sizes have to be defined and observed for both primary and secondary memory.

 Paging exploits locality of reference for efficient access.

 For instance, we have locality of reference during execution of a while or for loop, or a call to a procedure.
MAPPING THE PAGES
 Paging stipulates that main memory is partitioned into frames of sufficiently small size.

 We also require that the virtual space is divided into pages of the same size as the frames.

 This equality facilitates movement of a page from anywhere in the virtual space (on disk) to a frame anywhere in physical memory.

 The capability to map "any page" to "any frame" gives a lot of operational flexibility.
MAPPING THE PAGES
 Division of main memory into frames is like fixed partitioning, so keeping the frame size small helps keep internal fragmentation small.

 Paging supports multiprogramming. In general there can be many processes in main memory, each with a different number of pages. To that extent, paging is like dynamic variable partitioning.
PAGING : IMPLEMENTATION

DEMAND PAGING
 Bring a page into memory only when it is needed:
 less I/O needed
 less memory needed
 faster response
 more users

 A page is needed when there is a reference to it:
 an invalid reference leads to an abort;
 a not-in-memory reference means the page is brought into memory.
TRANSFER OF A PAGED MEMORY TO CONTIGUOUS DISK SPACE

PAGING: REPLACEMENT
 When a page is no longer needed it can be replaced.
 Consider an example: process P29 has all its pages present in main memory.
 Process P6 does not have all its pages in main memory. If a page is present, we record 1 against its entry. The OS also records whether a page has been referenced for a read or for a write; in both cases a reference is recorded.
PAGING: REPLACEMENT
 If a page frame is written into, a modified bit is set. In our example, frames 4, 9, 40, 77, 79 have been referenced, and page frames 9 and 13 have been modified.

 Sometimes the OS may also keep protection information as rwe bits. If a reference is made to a certain virtual address and its corresponding page is not present in main memory, we say a page fault has occurred.

 Typically, a page fault is followed by moving a page in. However, this may require that we move a page out to create space for it. Usually this is done using an appropriate page replacement policy, to ensure that the throughput of the system does not suffer. We shall next see how a page replacement policy can affect the performance of a system.
PAGE REPLACEMENT POLICIES
 Towards understanding page replacement policies, we shall consider a simple example of a process P which gets an allocation of four pages to execute.
 Further, we assume that the OS collects some information (depicted in the figure) about the use of these pages as the process progresses in execution.
PAGE REPLACEMENT

PAGE REPLACEMENT POLICIES
 Let us examine the information depicted in the figure in some detail, to determine how it may help in evolving a page replacement policy.

 Note that we have the following information available about P:

- The time of arrival of each page: we assume that the process began at time unit 100. During its progression, pages have been loaded at times 112, 117, 119, and 120.

- The time of last usage: this indicates when a certain page was last used. It depends entirely upon which part of the process P is being executed at any time.

- The frequency of use: we have also maintained the frequency of use over some fixed interval of time T in the immediate past. This clearly depends upon the nature of control flow in process P.
PAGE REPLACEMENT POLICIES
 Based on these pieces of information, suppose that at time unit 135 the process P experiences a page fault. What should be done? Based on the choice of policy and the data collected for P, we can decide which page to swap out to bring in the new page.

 FIFO policy: this policy simply removes pages in the order they arrived in main memory. Using this policy, we remove the page based on its time of arrival: the page located at 14 is swapped, as it arrived in memory earliest.
FIFO PAGE REPLACEMENT

PAGE REPLACEMENT POLICIES
 LRU policy: LRU expands to least recently used. This policy suggests that we remove the page whose last usage is farthest from the current time. Note that the current time is 135 and the least recently used page is the page located at 23; it was last used at time unit 125, and every other page has been used more recently. So page 23 should be swapped if the LRU replacement policy is employed.

 NFU policy: NFU expands to not frequently used. This policy uses the count of usage of each page over the interval T as the criterion. Note that process P has made no use of the page located at 9, while the other pages have usage counts of 2, 3 or even 5. The argument is that those pages may still be needed, so the page at 9 should be swapped.
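These policies are easy to simulate. Below is a short Python sketch comparing FIFO and LRU on a reference string; the reference string and frame count are assumed for illustration and are not the slides' example data:

    # Count page faults under FIFO and LRU for a reference string.
    def simulate(refs, nframes, policy):
        frames, faults = [], 0
        for page in refs:
            if page in frames:
                if policy == "LRU":      # on a hit, mark most recently used
                    frames.remove(page)
                    frames.append(page)
                continue
            faults += 1                  # page fault
            if len(frames) == nframes:
                frames.pop(0)            # front = oldest (FIFO) / least recent (LRU)
            frames.append(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
    print(simulate(refs, 3, "FIFO"), simulate(refs, 3, "LRU"))   # -> 10 9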
LRU PAGE REPLACEMENT

OPTIMAL (NFU) PAGE REPLACEMENT

PAGE HIT AND PAGE MISS

 When the referenced page frame is in main memory we have a page hit; when a page fault occurs we have a page miss.
PAGE TABLE WHEN SOME PAGES ARE NOT IN MAIN
MEMORY

PAGE FAULT
 The first reference to a page that is not in memory will trap to the operating system: a page fault.
1. The operating system looks at another table to decide:
 invalid reference: abort;
 valid, but just not in memory: continue.
2. Get an empty frame.
3. Swap the page into the frame.
4. Reset the tables.
5. Set the validation bit to v.
6. Restart the instruction that caused the page fault.
STEPS IN HANDLING PAGE FAULT

PAGING: HW SUPPORT
 Recall that we need hardware within the CPU to support paging. The CPU generates a logical address which must be translated to a physical address. The figure indicates the basic address generation and translation.
PAGING: HW SUPPORT
 The sequence of steps in the generation of an address is as follows:

- The process generates a logical address. This address is interpreted in two parts.

- The first part of the logical address identifies the virtual page.

- The second part of the logical address gives the offset within the page.
PAGING: HW SUPPORT
 The first part is used as an input to the page table to find out:

- whether the page is in main memory;

- the page frame number for this virtual page.

 The page frame number forms the first part of the physical memory address.

 The offset forms the second part of the physical memory location.
PAGING: HW SUPPORT

 A page fault is generated if the page is not in physical memory – a trap.

 The trap suspends the regular sequence of operations and brings the required page from disk into main memory.
SEGMENTATION
 Segmentation also supports the virtual memory concept.

 One view is that each part of a process – its code segment, its stack requirements (for data and nested procedure calls), its different object modules, etc. – has a contiguous space. This view is one-dimensional.
SEGMENTATION
 A memory-management scheme that supports the user view of memory.

 A program is a collection of segments. A segment is a logical unit such as:
- main program
- procedure
- function
- method
- object
- local variables, global variables
- common block
- stack
- symbol table, arrays
USER VIEW OF PROGRAM

SEGMENTATION
 Each segment has requirements that vary over time – stacks grow or shrink, and the memory requirements of object and data segments may change during the process's lifetime.

 We therefore have a two-dimensional view of a process's memory requirement: each process segment can acquire a variable amount of space over time.
LOGICAL VIEW OF SEGMENTATION

SEGMENTATION ARCHITECTURE

 A logical address consists of a two-tuple:
<segment-number, offset>
 Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
 base – contains the starting physical address where the segment resides in memory
 limit – specifies the length of the segment

 Segment-table base register (STBR) points to the segment table's location in memory.
 Segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR.
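A sketch of the base/limit translation, with segment-table contents assumed for the example:

    # Segment-table translation with limit checking.
    segment_table = [
        {"base": 1400, "limit": 1000},   # segment 0
        {"base": 6300, "limit": 400},    # segment 1
        {"base": 4300, "limit": 1100},   # segment 2
    ]

    def translate(s, offset):
        if s >= len(segment_table):          # s must be < STLR
            raise MemoryError("trap: invalid segment number")
        entry = segment_table[s]
        if offset >= entry["limit"]:         # offset checked against limit
            raise MemoryError("trap: offset beyond segment limit")
        return entry["base"] + offset        # physical address

    print(translate(2, 53))   # -> 4353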
SEGMENTATION ARCHITECTURE

 Protection: with each entry in the segment table we associate:
 a validation bit (0 means an illegal segment)
 read/write/execute privileges

 Protection bits are associated with segments; code sharing occurs at the segment level.
 Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
ADDRESS TRANSLATION ARCHITECTURE

SEGMENTATION EXAMPLE

SHARING OF SEGMENTS

SEGMENTATION
Segmentation: paging with variable page size.

Advantages:
 memory protection is added to the segment table, as in paging
 sharing of memory is similar to paging (but per area rather than per page)

Drawbacks:
 allocation algorithms are the same as for memory partitions
 external fragmentation: back to the compaction problem...

Solution: combine segmentation and paging.
SEGMENTATION
 Segmentation is similar to paging, except that we have segment-table lookups to identify address values.

 Comparing segmentation and paging:

- Paging offers the simplest mechanism to effect virtual addressing.

- Paging suffers from internal fragmentation, segmentation from external fragmentation.
SEGMENTATION
 Segmentation affords separate compilation of each segment, with a view to linking them up later.

 A user may develop a code segment and share it amongst many applications.

 In paging, a process address space is linear and hence one-dimensional. In segmentation, each procedure and data segment has its own virtual space mapping, which offers a greater degree of protection.
SEGMENTATION
 If a program's address space fluctuates considerably, paging may result in frequent page faults. Segmentation presents no such problem.

 Paging partitions program and data space uniformly and is hence simpler to manage, but it is difficult to distinguish data space from program space. In segmentation, space is partitioned according to the logical division of program segments.
SEGMENTATION AND PAGING
 In practice, there are segments for the code(s), data and stack.

 Each segment carries rwe (read/write/execute) information as well.

 Usually, stack and data have read and write permissions only; code has read and execute permissions only.
SEGMENTATION & PAGING
 A clever scheme with the advantages of both is segmentation with paging. In such a scheme each segment has a descriptor with its pages identified.
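A sketch of the combined translation, where each segment descriptor points to its own page table; all table contents and the page size are assumed for the example:

    # Segmentation with paging: segment -> page table -> frame.
    PAGE_SIZE = 1024

    segments = [
        {"limit": 3000, "page_table": {0: 8, 1: 3, 2: 11}},   # segment 0
        {"limit": 1500, "page_table": {0: 6, 1: 9}},          # segment 1
    ]

    def translate(s, offset):
        seg = segments[s]
        if offset >= seg["limit"]:           # segment limit check
            raise MemoryError("trap: offset beyond segment limit")
        p, d = divmod(offset, PAGE_SIZE)     # page number and displacement
        frame = seg["page_table"][p]         # per-segment page table
        return frame * PAGE_SIZE + d

    print(translate(0, 2500))   # page 2, d=452 -> 11*1024 + 452 = 11716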
