Memory Management

This is one of the topics from Operating Systems.

Uploaded by Priyambada Singh

UNIT III

Memory Management
Contents

• Functions of Memory Management
• What Is Main Memory
• Basic Hardware for Memory Management
• Address Binding
• Static and Dynamic Loading
• Logical Versus Physical Address Space
• Static and Dynamic Linking
• Swapping
• Contiguous Memory Allocation
  • Contiguous Memory Allocation Techniques
  • Fixed-size Partition Scheme
  • Variable-size Partition Scheme
  • Strategies Used for Contiguous Memory Allocation Input Queues
• Fragmentation
  • External Fragmentation
  • Internal Fragmentation
• Paging
  • Hardware Support
  • Basic Method for Paging
  • Shared Pages
  • Protection
  • Advantages and Disadvantages of Paging
• Segmentation
  • Segmentation Hardware
  • Fragmentation in Segmentation
  • Advantages and Disadvantages of Segmentation
• Differences between Paging and Segmentation
• Segmented Paging
  • Advantages and Disadvantages of Segmented Paging
• Questions
The term memory can be defined as a collection of data in a specific format. It is used to store instructions and processed data. The memory comprises a large array or group of words or bytes, each with its own location. The primary motive of a computer system is to execute programs. These programs, along with the information they access, should be in the main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.

The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be at least partially in main memory during execution. To improve both the utilization of the CPU and the speed of its response to users, a general-purpose computer must keep several processes in memory. Many memory-management schemes exist, reflecting various approaches, and the effectiveness of each algorithm depends on the situation. Selection of a memory-management scheme for a system depends on many factors, especially on the hardware design of the system. Most algorithms require hardware support.

To achieve a degree of multiprogramming and proper utilization of memory, memory management is important. Many memory-management methods exist, reflecting various approaches, and the effectiveness of each algorithm depends on the situation.

In this chapter, we discuss various ways to manage memory. The memory-management algorithms vary from a primitive bare-machine approach to paging and segmentation strategies. Each approach has its own advantages and disadvantages. Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system. As we shall see, many algorithms require hardware support, leading many systems to have closely integrated hardware and operating-system memory management.
What Is Main Memory

The main memory is central to the operation of a modern computer. Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions. Main memory is a repository of rapidly available information shared by the CPU and I/O devices. Main memory is the place where programs and information are kept when the processor is effectively utilizing them. Main memory is associated with the processor, so moving instructions and information into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile; RAM loses its data when a power interruption occurs.

Figure 6.1 : Memory Hierarchy (registers, cache, main memory, electronic disk, magnetic disk, optical disk, magnetic tapes)

Functions of Memory Management

• Memory management keeps track of the status of each memory location, whether it is free or allocated. It addresses primary memory by providing abstractions so that software perceives a large memory is allocated to it.

• The memory manager permits computers with a small amount of main memory to execute programs larger than the size or amount of available memory. It does this by moving information back and forth between primary memory and secondary memory by using the concept of swapping.

• The memory manager is responsible for protecting the memory allocated to each process from being corrupted by another process. If this is not ensured, then the system may exhibit unpredictable behaviour.

• Memory managers should enable sharing of memory space between processes. Thus, two programs can reside at the same memory location, although at different times.

To sum up, memory management plays several roles in a computer system. It should allocate and de-allocate memory before and after process execution, minimize fragmentation issues, ensure proper utilization of main memory, and maintain data integrity while a process is executing.

Basic Hardware for Memory Management

Main memory and the registers built into the processor itself are the only storage that the CPU can access directly. There are machine instructions that take memory addresses as arguments, but none that take disk addresses. Therefore, any instructions in execution, and any data being used by the instructions, must be in one of these direct-access storage devices. If the data are not in memory, they must be moved there before the CPU can operate on them. Registers that are built into the CPU are generally accessible within one cycle of the CPU clock. Most CPUs can decode instructions and perform simple operations on register contents at the rate of one or more operations per clock tick. The same cannot be said of main memory, which is accessed via a transaction on the memory bus. Completing a memory access may take many cycles of the CPU clock. In such cases, the processor normally needs to stall, since it does not have the data required to complete the instruction that it is executing. This situation is intolerable because of the frequency of memory accesses. The remedy is to add fast memory between the CPU and main memory, typically on the CPU chip, for fast access.

On multiuser systems, we must additionally protect user processes from one another. This protection must be provided by the hardware, because the operating system doesn't usually intervene between the CPU and its memory accesses (because of the resulting performance penalty). Hardware implements this protection in several different ways, as we show throughout the chapter. Here, we outline one possible implementation. We first need to make sure that each process has a separate memory space. A separate per-process memory space protects the processes from each other and is fundamental to having multiple processes loaded in memory for concurrent execution.

To separate memory spaces, we need the ability to determine the range of legal addresses that the process may access and to ensure that the process can access only these legal addresses. We can provide this protection by using two registers, usually a base and a limit, as illustrated in Figure 6.2. The base register holds the smallest legal physical memory address; the limit register specifies the size of the range. For example, if the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive). Protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with the registers.

Figure 6.2 : A Base and a Limit Register Define a Logical Address Space (the operating system occupies low memory; user processes occupy regions starting at 256000, 300040, 420940, and 880000, up to 1024000)

Any attempt by a program executing in user mode to access operating-system memory or other users' memory results in a trap to the operating system, which treats the attempt as a fatal error (Figure 6.3). This scheme prevents a user program from (accidentally or deliberately) modifying the code or data structures of either the operating system or other users. The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction. Since privileged instructions can be executed only in kernel mode, and since only the operating system executes in kernel mode, only the operating system can load the base and limit registers.

Figure 6.3 : Hardware Address Protection with Base and Limit Registers (each address is checked against base and base + limit; a failed check traps to the operating system with an addressing error)

This scheme allows the operating system to change the value of the registers but prevents user programs from changing the registers' contents.
The operating system, executing in kernel mode, is given unrestricted access to both operating-system memory and users' memory. This provision allows the operating system to load users' programs into users' memory, to dump out those programs in case of errors, to access and modify parameters of system calls, to perform I/O to and from user memory, and to provide many other services. Consider, for example, that an operating system for a multiprocessing system must execute context switches, storing the state of one process from the registers into main memory before loading the next process's context from main memory into the registers.
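The base/limit check that the hardware performs on every user-mode address can be mimicked in a few lines. The numbers reuse the example above (base 300040, limit 120900), the trap is modeled as a Python exception, and the function name is invented for the sketch:

```python
BASE = 300040    # smallest legal physical address (from the example above)
LIMIT = 120900   # size of the legal range

def check_address(addr, kernel_mode=False):
    """Toy model of the hardware check in Figure 6.3.

    In kernel mode the operating system bypasses the check; in user mode
    any address outside [BASE, BASE + LIMIT) traps to the OS.
    """
    if kernel_mode:
        return addr
    if BASE <= addr < BASE + LIMIT:
        return addr
    raise PermissionError(f"trap: illegal access to address {addr}")

print(check_address(300040))   # lowest legal address -> 300040
print(check_address(420939))   # highest legal address -> 420939
try:
    check_address(420940)      # one past the range: the hardware traps
except PermissionError as err:
    print(err)
```

Note that the legal range is half-open: base + limit itself (420940) already traps, which is why the text says addresses 300040 through 420939 inclusive.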

Address Binding

Usually, a program resides on a disk as a binary executable file. To be executed, the program must be brought into memory and placed within a process. Depending on the memory management in use, the process may be moved between disk and memory during its execution. The processes on the disk that are waiting to be brought into memory for execution form the input queue. The normal single-tasking procedure is to select one of the processes in the input queue and to load that process into memory. As the process is executed, it accesses instructions and data from memory. Eventually, the process terminates, and its memory space is declared available. Most systems allow a user process to reside in any part of the physical memory. Thus, although the address space of the computer may start at 00000, the first address of the user process need not be 00000. You will see later how a user program actually places a process in physical memory. In most cases, a user program goes through several steps, some of which may be optional, before being executed (Figure 6.4). Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic (such as the variable count). A compiler typically binds these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module"). The linkage editor or loader in turn binds the relocatable addresses to absolute addresses (such as 74014). Each binding is a mapping from one address space to another.

Classically, the binding of instructions and data to memory addresses can be done at any step along the way:

• Compile Time: If you know at compile time where the process will reside in memory, then absolute code can be generated. For example, if you know that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code. The MS-DOS .COM-format programs are bound at compile time.

• Load Time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.

• Execution Time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this scheme to work. Most general-purpose operating systems use this method.
Figure 6.4 : Multistep Processing of a User Program (compile time: compiler or assembler; load time: linkage editor and loader; execution time: dynamic linking and the in-memory binary memory image)

Logical Versus Physical Address Space

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address. The compile-time and load-time address-binding methods generate identical logical and physical addresses.

However, the execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address. We use logical address and virtual address interchangeably in this text. The set of all logical addresses generated by a program is a logical address space. The set of all physical addresses corresponding to these logical addresses is a physical address space.

Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU). We can choose from many different methods to accomplish such mapping.

Figure 6.5 : Dynamic Relocation using a Relocation Register (the CPU issues logical address 346; the relocation register holds 14000; the MMU adds them to form physical address 14346)
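The relocation scheme in Figure 6.5 amounts to an addition performed by the MMU on every memory reference. A minimal sketch, using the figure's values (relocation register 14000, logical address 346); the function name is invented for the example:

```python
RELOCATION_REGISTER = 14000  # value loaded by the OS, as in Figure 6.5

def to_physical(logical_addr):
    """Map a CPU-generated logical address to a physical address.

    The user program only ever sees logical addresses starting at 0;
    the MMU adds the relocation register on every memory reference.
    """
    return logical_addr + RELOCATION_REGISTER

print(to_physical(346))    # 14346, matching the figure
print(to_physical(0))      # the process's first byte lives at 14000
```

Because the program never sees physical addresses, the operating system can move the process simply by copying its image and reloading the relocation register, which is exactly why execution-time binding permits swapping a process into a different memory space later in this chapter.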

Static and Dynamic Loading

Static loading is the process of loading the entire program into a fixed address. It requires more memory space.

It was observed in our previous discussions that it is necessary for the entire program and all data of a process to be in physical memory for the process to execute. The size of a process has thus been limited to the size of physical memory. To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If it has not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change. Then control is passed to the newly loaded routine.

The advantage of dynamic loading is that a routine is loaded only when it is needed. This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases. In this case, although the total program size may be large, the portion that is actually used (and hence loaded) may be much smaller. Dynamic loading does not require special support from the operating system. It is the responsibility of the users to design their programs to take advantage of such a method. Operating systems may help the programmer, however, by providing library routines to implement dynamic loading.
Static and Dynamic Linking

To perform a linking task, a linker is used. A linker is a program that takes one or more object files generated by a compiler and combines them into a single executable file.

• Static Linking: In static linking, the linker combines all necessary program modules into a single executable program, so there is no run-time dependency. Some operating systems support only static linking, in which system language libraries are treated like any other object module.

• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each appropriate library-routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory or not. If it is not available, the program loads the routine into memory.
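The stub mechanism can be sketched the same way: the first call goes through a small piece of code that locates the real routine and then replaces itself. This is a conceptual model only; real dynamic linkers patch addresses in a process's linkage tables, and all names here are invented for the sketch.

```python
def make_stub(name, library):
    """Return a stub that resolves `name` on first call, then replaces itself."""
    def stub(*args):
        real = library[name]          # locate/load the real library routine
        bindings[name] = real         # overwrite the stub with the routine
        return real(*args)            # first call still produces a result
    return stub

library = {"upper": str.upper}                     # the shared library "on disk"
bindings = {"upper": make_stub("upper", library)}  # the program starts with a stub

print(bindings["upper"]("hello"))              # first call goes via the stub -> HELLO
print(bindings["upper"] is library["upper"])   # stub has replaced itself -> True
```

After the first call, every later call through `bindings` reaches the library routine directly, so the indirection cost is paid exactly once per routine.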

Swapping

A process needs to be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. For example, assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and to swap in another process to the memory space that has been freed (as shown in Figure 6.6). In the meantime, the CPU scheduler will allocate a time slice to some other process in memory. When each process finishes its quantum, it will be swapped with another process. Ideally, the memory manager can swap processes fast enough that some processes will be in memory, ready to execute, when the CPU scheduler wants to reschedule the CPU. The quantum must also be sufficiently large that reasonable amounts of computing are done between swaps. A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process so that it can load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is sometimes called roll out, roll in.

Normally, a process that is swapped out will be swapped back into the same memory space it occupied previously. This restriction is dictated by the method of address binding. If binding is done at assembly or load time, then the process cannot be moved to different locations. If execution-time binding is being used, then a process can be swapped into a different memory space, because the physical addresses are computed during execution time. Swapping requires a backing store. The backing store is commonly a fast disk. It must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.

Whenever the CPU scheduler decides to execute a process, it checks to see whether the next process in the queue is in memory. If it is not, the dispatcher swaps out a process currently in memory and swaps in the desired process, then reloads registers as normal and transfers control. The context-switch time in such a swapping system is fairly high.

To get an idea of the context-switch time, let us assume that the user process is of size 1 MB and the backing store is a standard hard disk with a transfer rate of 5 MB per second. The actual transfer of the 1 MB process to or from memory takes 1000 KB / 5000 KB per second = 1/5 second = 200 milliseconds. Assuming that no head seeks are necessary, and an average latency of 8 milliseconds, the swap time takes 208 milliseconds. Since we must both swap out and swap in, the total swap time is then about 416 milliseconds.

Figure 6.6 : Swapping of Two Processes using a Disk as a Backing Store (process P1 is swapped out of main memory to the backing store while process P2 is swapped in; the operating system stays resident above the user space)

For efficient CPU utilization, we want our execution time for each process to be long relative to the swap time. Thus, in a round-robin CPU-scheduling algorithm, for example, the time quantum should be substantially larger than 0.416 seconds. Notice that the major part of the swap time is transfer time. The total transfer time is directly proportional to the amount of memory swapped. If we have a computer system with 128 MB of main memory and a resident operating system taking 5 MB, the maximum size of the user process is 123 MB. However, many user processes may be much smaller than this size, say, 1 MB. A 1 MB process could be swapped out in 208 milliseconds, compared to the 24.6 seconds for swapping 123 MB. Therefore, it would be useful to know exactly how much memory a user process is using, not simply how much it might be using. Then, we would need to swap only what is actually used, reducing swap time. For this method to be effective, the user must keep the system informed of any changes in memory requirements. Thus, a process with dynamic memory requirements will need to issue system calls (request memory and release memory) to inform the operating system of its changing memory needs.

Swapping is constrained by other factors as well. If we want to swap a process, we must be sure that it is completely idle. Of particular concern is any pending I/O. A process may be waiting for an I/O operation when we want to swap that process to free up its memory. However, if the I/O is asynchronously accessing the user memory for I/O buffers, then the process cannot be swapped. Assume that the I/O operation was queued because the device was busy. Then, if we were to swap out process P1 and swap in process P2, the I/O operation might then attempt to use memory that now belongs to process P2. The two main solutions to this problem are never to swap a process with pending I/O, or to execute I/O operations only into operating-system buffers. Transfers between operating-system buffers and process memory then occur only when the process is swapped in. The assumption that swapping requires few, if any, head seeks needs further explanation. We postpone discussing this issue until we cover secondary-storage seeks.

Generally, swap space is allocated as a chunk of disk, separate from the file system, so that its use is as fast as possible. Currently, standard swapping is used in few systems. It requires too much swapping time and provides too little execution time to be a reasonable memory-management solution. Modified versions of swapping, however, are found on many systems. A modification of swapping is used in many versions of UNIX. Swapping was normally disabled but would start if many processes were running and were using a threshold amount of memory. Swapping would again be halted if the load on the system were reduced.

Early PCs lacked sophisticated hardware (or operating systems that take advantage of the hardware) to implement more advanced memory-management methods, but they were used to run multiple large processes by a modified version of swapping. A prime example is the Microsoft Windows 3.1 operating system, which supports concurrent execution of processes in memory. If a new process is loaded and there is insufficient main memory, an old process is swapped to disk. This operating system, however, does not provide full swapping, because the user, rather than the scheduler, decides when it is time to preempt one process for another. Any swapped-out process remains swapped out (and not executing) until the user selects that process to run. Follow-on Microsoft operating systems, such as Windows NT, take advantage of advanced MMU features now found even on PCs.

Contiguous Memory Allocation

Memory is a huge collection of bytes, and memory allocation refers to allocating space to computer applications. There are mainly two types of memory allocation: contiguous and non-contiguous. Contiguous memory allocation gives a process a single memory space to complete its tasks. Non-contiguous memory allocation, on the other hand, assigns a process distinct memory sections at numerous memory locations.

Contiguous memory allocation in the operating system is a memory allocation technique. In contiguous memory allocation, each process is contained in a single contiguous section of memory. In this memory allocation, all the available memory space remains together in one place, which implies that the freely available memory partitions are not spread here and there across the whole memory space.

In contiguous memory allocation, which is a memory management technique, whenever there is a request by the user process for memory, a single section of the contiguous memory block is given to that process according to its requirement.

Whenever a process has to be allocated space in the memory, following the contiguous memory allocation technique, we have to allot the process a continuous empty block of space. This allocation can be done in two ways:
1. Fixed-size Partition Scheme
2. Variable-size Partition Scheme
Let us look at both of these schemes in detail, along with their advantages and disadvantages.

Fixed-size Partition Scheme

In this type of contiguous memory allocation technique, each process is allotted a fixed-size continuous block in the main memory. That means there will be continuous blocks of fixed size into which the complete memory will be divided, and each time a process comes in, it will be allotted one of the free blocks. Irrespective of the size of the process, each is allotted a block of the same size of memory space. This technique is also called static partitioning.

Figure 6.7 : Fixed-size Partition Scheme (three processes of sizes 3 MB, 1 MB, and 4 MB in the input process queue are each allotted one of the fixed 5 MB partitions in main memory; the operating system occupies the first region)


In Figure 6.7, we have three processes in the input queue that have to be allotted space in the memory. As we are following the fixed-size partition technique, the memory has fixed-sized blocks. The first process, which is of size 3 MB, is allotted a 5 MB block; the second process, which is of size 1 MB, is also allotted a 5 MB block; and the 4 MB process is likewise allotted a 5 MB block. So, the process size doesn't matter: each is allotted the same fixed-size memory block.

It is clear that in this scheme, the number of continuous blocks into which the memory will be divided is decided by the amount of space each block covers, and this, in turn, dictates how many processes can stay in the main memory at once.

Advantages of Fixed Size Partition

The advantages of a fixed-size partition scheme are:
• Because all of the blocks are the same size, this scheme is simple to implement. All we have to do is divide the memory into fixed blocks and assign processes to them.
• It is easy to keep track of how many blocks of memory are left, which in turn decides how many more processes can be given space in the memory.
• As multiple processes can be kept in the memory at a time, this scheme can be implemented in a system that needs multiprogramming.
Disadvantages of Fixed Size Partition

Though the fixed-size partition scheme has many advantages, it also has some disadvantages:
• As the size of the blocks is fixed, we will not be able to allot space to a process that has a greater size than the block.
• The size of the blocks decides the degree of multiprogramming: only as many processes can remain in the memory at once as there are blocks.
• If the size of the block is greater than the size of the process, as shown in Figure 6.8, we have no other choice but to assign the process to this block, but this will lead to much empty space left behind in the block. This empty space could have been used to accommodate a different process. This is called internal fragmentation. Hence, this technique may lead to space wastage.

Figure 6.8 : Depiction of Internal Fragmentation (a 3 MB process placed in a 5 MB fixed-size partition leaves 2 MB of unusable internal fragmentation)
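Fixed-size partitioning and the internal fragmentation it causes can be simulated in a few lines, using the 5 MB partitions and process sizes from Figures 6.7 and 6.8. The function and variable names are invented for this sketch:

```python
PARTITION_MB = 5  # every block in main memory has this fixed size

def place(processes, num_partitions):
    """Assign each process a whole fixed-size partition; report wasted space.

    Returns (placements, total internal fragmentation in MB). Processes
    larger than a partition, or arriving when no block is free, are rejected
    (their placement is None).
    """
    placements, wasted, used = [], 0, 0
    for name, size in processes:
        if size > PARTITION_MB or used == num_partitions:
            placements.append((name, None))   # cannot be placed
            continue
        placements.append((name, PARTITION_MB))
        used += 1
        wasted += PARTITION_MB - size         # internal fragmentation

    return placements, wasted

procs = [("P1", 3), ("P2", 1), ("P3", 4)]     # sizes from Figure 6.7, in MB
placed, frag = place(procs, num_partitions=3)
print(placed)   # every process receives a full 5 MB block
print(frag)     # 2 + 4 + 1 = 7 MB lost to internal fragmentation
```

Note how the 1 MB process wastes 4 MB of its block: the smaller the process relative to the partition, the worse the internal fragmentation, which is exactly the drawback illustrated in Figure 6.8.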

l'DM · variabi;.be 'i>ktiti~ -Seb~


..e..,;~~,..:.:..:.:..., ;:-~;.-:-;~~~-~.,;..
•;~N.:K;;-~ u~~•t-.
·:,./~
~...,,;,,..;~e:.:.,.,,,.k...;_Jf>.°i;.;;."t..l

In this type of contiguous memory allocation technique, no fixed blocks or partitions are made
in the memory. Instead, each process is allotted a variable-sized block depending upon its
requirements. That means, whenever a new process wants some space in the memory, if
available, this amount of space is allotted to it. Hence, the size of each block depends on the
size and requirements of the process which occupies it.

[Figure 6.9: Variable-Size Partition — below the operating system, Process 1 (3 MB), Process 2 (5 MB), and Process 3 are each allotted exactly the amount of space they require.]

In the figure above, there are no fixed-size partitions. Instead, the first process needs 3 MB of
memory and hence is allotted that much only. Similarly, the other two processes are allotted
5 MB and 8 MB respectively, only as much space as is required by them. As the blocks are
variable-sized and are created as processes arrive, this scheme is also called Dynamic
Partitioning.

Advantages of Variable-Size Partition

1. As the processes have blocks of space allotted to them as per their requirements, there is no
   internal fragmentation. Hence, there is no memory wastage in this scheme.
2. The number of processes that can be in the memory at once will depend upon how many
   processes are in the memory and how much space they occupy. Hence, it will be different
   for different cases and will be dynamic.
3. As there are no blocks of fixed size, even a process of big size can be allotted space.
Disadvantages of Variable-Size Partition

Though the variable-size partition scheme has many advantages, it also has some disadvantages:
1. Because this approach is dynamic, a variable-size partition scheme is difficult to implement.
2. It is difficult to keep track of processes and the remaining space in the memory.
Strategies Used for Contiguous Memory Allocation: Input Queues

In general, as mentioned, the memory blocks available comprise a set of free spaces of various
sizes scattered throughout memory. When a process arrives and needs memory, the system
searches the set for a free space that is large enough for this process. If the free space is too
large, it is split into two parts. One part is allocated to the arriving process; the other is returned
to the set of holes. When a process terminates, it releases its block of memory, which is then
placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes
are merged to form one larger free space. At this point, the system may need to check whether
there are processes waiting for memory and whether the freed and recombined memory could
satisfy the demands of any of these waiting processes.

This procedure is a particular instance of the general dynamic storage allocation problem: how
to satisfy a request of size n from a list of holes. There are many solutions to this problem. The
first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole
from the set of available holes.
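The hole bookkeeping described above can be sketched in a few lines (a minimal model with assumed names, where each hole is a (start, length) pair): allocation splits a hole, and freeing a block merges it back with any adjacent holes.

```python
def split_hole(holes, size):
    """First-fit allocate `size` from a sorted list of (start, length) holes.
    Returns (start, new_holes), or (None, holes) if no hole is large enough."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            rest = holes[:i] + holes[i + 1:]
            if length > size:                 # too large: split into two parts
                rest.insert(i, (start + size, length - size))
            return start, rest
    return None, holes

def free_block(holes, start, size):
    """Return a block to the hole list, coalescing adjacent holes."""
    holes = sorted(holes + [(start, size)])
    merged = [holes[0]]
    for s, l in holes[1:]:
        ps, pl = merged[-1]
        if ps + pl == s:                      # adjacent: merge into one hole
            merged[-1] = (ps, pl + l)
        else:
            merged.append((s, l))
    return merged

holes = [(0, 100)]
a, holes = split_hole(holes, 30)    # process A gets addresses 0..30
b, holes = split_hole(holes, 20)    # process B gets addresses 30..50
holes = free_block(holes, a, 30)    # free A: holes (0,30) and (50,50) stay apart
holes = free_block(holes, b, 20)    # free B: all holes merge back into one
print(holes)   # [(0, 100)]
```

Freeing B bridges the gap between the two existing holes, so the list collapses back to a single large free space, exactly the merging step the paragraph describes.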

First-Fit Memory Allocation Technique

This is a simple strategy in which we start from the beginning of the set of holes and allot the
first hole that is large enough, as per the requirements of the process. The first-fit strategy can
also be implemented so that we start our search for the first-fit hole from the place where the
previous search left off. In this strategy, the processor allocates the nearest available memory
block that is large enough, which makes first fit fast in execution.
Disadvantages of First-Fit Memory Allocation

• It wastes a lot of memory. The processor ignores whether the size of the partition allocated
  to the job is very large as compared to the size of the job or not; it just allocates the memory.
  As a result, a lot of memory is wasted, and many jobs may not get space in the memory and
  would have to wait for another job to complete.
Best-Fit Memory Allocation Technique

This is a greedy strategy that aims to reduce the memory wasted because of internal
fragmentation in the case of static partitioning; hence, we allot to the process the smallest hole
that fits its requirements. To do this, we need to sort the holes according to their sizes and pick
the best fit for the process without wasting memory.
Advantages of Best-Fit Memory Allocation Technique

• Memory efficient. The operating system allocates the job the minimum possible space in
  the memory, making memory management very efficient. To save memory from getting
  wasted, it is the best method.

Disadvantages of Best-Fit Memory Allocation Technique

• It is a slow process. Checking the whole memory for each job makes the working of the
  operating system very slow. It takes a lot of time to complete the work.
Worst-Fit Memory Allocation Technique

This strategy is the opposite of the best-fit strategy. We sort the holes according to their sizes
and choose the largest hole to be allotted to the incoming process. The idea behind this
allocation is that as the process is allotted a large hole, it will have a lot of space left behind as
internal fragmentation. Hence, this will create a hole that will be large enough to accommodate
a few other processes.

Advantages of Worst-Fit Memory Allocation Technique

• Since this strategy chooses the largest hole/partition, there will be a large leftover space.
  This leftover space will be quite big, so other small processes can also be placed in that
  leftover partition.

Disadvantages of Worst-Fit Memory Allocation Technique

• It is a slow process because it traverses all the partitions in the memory and then selects the
  largest one, which is a time-consuming search.

Simulations have shown that both first fit and best fit are better than worst fit in terms of
decreasing time and storage utilization. Neither first fit nor best fit is clearly better than the
other in terms of storage utilization, but first fit is generally faster.

Fragmentation

Fragmentation is an unwanted problem in the operating system in which processes are loaded
and unloaded from memory, and free memory space becomes fragmented. Processes can't be
assigned to memory blocks because of the blocks' small size, and the memory blocks stay
unused. It is also necessary to understand that as programs are loaded and deleted from
memory, they generate free spaces, or holes, in the memory. These small blocks cannot be
allotted to newly arriving processes, resulting in inefficient memory use. The implications of
fragmentation depend entirely on the specific storage-allocation scheme in operation along
with the particular form of fragmentation. In some instances, fragmentation leads to some
unused storage capacity; this concept also applies to the unused space generated in this
situation.
The memory used for the preservation of a data set (like file formats) behaves very similarly in
other systems (like the FAT file system), irrespective of the amount of fragmentation (which can
range from none to the extreme).
Causes of Fragmentation

• The user processes are loaded and unloaded from the main memory, and all the processes
  are kept in memory blocks in the system's main memory.
• Various spaces are left after the loading and swapping of processes that other processes
  can't load into because of their sizes. The main memory is available, but the space isn't
  sufficient to load other processes, since the allocation of main memory to processes is
  dynamic.
Types of Fragmentation

Fragmentation is of two types:

• Internal Fragmentation
• External Fragmentation

Internal Fragmentation

Whenever a memory block gets allocated to a process, and the process happens to be smaller
than the total amount of memory requested, a free space is ultimately created in this memory
block. Due to this, the memory block's free space is unused. This is what causes internal
fragmentation.
For example, suppose that memory allocation in RAM is done using fixed partitioning (i.e.,
memory blocks of fixed size): 2 MB, 4 MB, 4 MB, and 8 MB are the available sizes. The
operating system uses a part of this RAM. Now a process P1 with a size of 3 MB arrives and is
given a memory block of 4 MB. As a result, the 1 MB of free space in this block is unused and
cannot be used to allocate memory to another process. This is called internal fragmentation.
This wastage of space arises due to the fixed sizes of the memory blocks. It may be resolved by
allocating space to the process via dynamic partitioning, which allocates only the amount of
space requested by the process. As a result, there is no internal fragmentation.
[Figure 6.10: Internal Fragmentation — allocating a 4 MB memory block to the 3 MB process P1 leaves 1 MB of the block unused.]

External Fragmentation

External fragmentation happens whenever a method of dynamic memory allocation allocates
some memory but leaves behind small amounts of unusable memory. The total usable quantity
of memory is reduced substantially when there is too much external fragmentation: there is
enough memory space to complete a request, but it is not contiguous. Thus, it is known as
external fragmentation.
[Figure 6.11: External Fragmentation — free blocks of 10 KB each lie scattered between assigned spaces, so a larger contiguous request cannot be satisfied even though enough total memory is free.]
Such holes arise when RAM is assigned to processes contiguously. If memory is assigned to
processes non-contiguously instead, external fragmentation can be decreased.
A solution to the problem of external fragmentation is compaction. The goal is to shuffle the
memory contents so as to place all free memory together in one large block. Compaction is not
always possible, however. If relocation is static and is done at assembly or load time,
compaction cannot be done. It is possible only if relocation is dynamic and is done at execution
time. If addresses are relocated dynamically, compaction requires only moving the program
and data and then changing the base register to reflect the new base address. When compaction
is possible, we must determine its cost. The simplest compaction algorithm is to move all
processes toward one end of memory; all holes move in the other direction, producing one
large hole of available memory. This scheme can be expensive.

Paging

Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be
non-contiguous. Paging avoids external fragmentation and the need for compaction, whereas
segmentation does not. It also solves the considerable problem of fitting memory chunks of
varying sizes onto the backing store.
The main idea behind paging is to divide each process in the form of pages. The main memory
will also be divided in the form of frames.
Basic Method for Paging

One page of the process is to be stored in one of the frames of the memory. The pages can be
stored at different locations of the memory, but the priority is always to find contiguous frames
or holes. Pages of the process are brought into the main memory only when they are required;
otherwise they reside in the secondary storage.
The hardware support for paging is illustrated in Figure 6.12. Every address generated by the
CPU is divided into two parts:

[Figure 6.12: Hardware Support for Paging — the CPU's logical address is translated through the page table into the physical address that is sent to physical memory.]

(a) a page number (p) and (b) a page offset (d). The page number is used as an index into a
page table. The page table contains the base address of each page in physical memory. This
base address is combined with the page offset to define the physical memory address that is
sent to the memory unit. The paging model of memory is shown in Figure 6.13.
[Figure 6.13: Paging Model of Logical and Physical Memory — pages 0 to 3 of logical memory are mapped through the page table (page 0 to frame 1, page 1 to frame 4, page 2 to frame 3, page 3 to frame 7) into the frames of physical memory.]
The page size (like the frame size) is defined by the hardware. The size of a page is a power of 2,
varying between 512 bytes and 1 GB per page, depending on the computer architecture. The
selection of a power of 2 as a page size makes the translation of a logical address into a page
number and page offset particularly easy. If the size of the logical address space is 2^m, and a
page size is 2^n bytes, then the high-order m - n bits of a logical address designate the page
number, and the n low-order bits designate the page offset. Thus, the logical address is as
follows:

    page number | page offset
         p      |      d
       m - n    |      n

Figure 6.14: Where p is an index into the page table and d is the displacement within the page.
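Because the page size is a power of 2, the split above is just a shift and a mask. A small sketch with assumed sizes (m = 16 and n = 10, i.e., a 64 KB logical space with 1 KB pages):

```python
# Splitting a logical address into page number p (high m-n bits) and
# page offset d (low n bits), as in Figure 6.14. m and n are assumed values.

m, n = 16, 10                 # 2^16-byte logical space, 2^10-byte pages

def split(logical_address):
    p = logical_address >> n              # high-order m - n bits: page number
    d = logical_address & ((1 << n) - 1)  # low-order n bits: page offset
    return p, d

addr = 0x2C45
assert addr < 2 ** m           # address fits in the logical address space
p, d = split(addr)
print(p, d)                    # 11 69 -- page 11, offset 69
assert addr == (p << n) | d    # the split loses no information
```

The reverse combination (p << n) | d is exactly what the hardware does when it replaces p with the frame's base address.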
When we use a paging scheme, we have no external fragmentation: any free frame can be
allocated to a process that needs it. However, we may have some internal fragmentation. Notice
that frames are allocated as units. If the memory requirements of a process do not happen to
coincide with page boundaries, the last frame allocated may not be completely full. For
example, if the page size is 2,048 bytes, a process of 72,766 bytes will need 35 pages plus 1,086
bytes. It will be allocated 36 frames, resulting in internal fragmentation of 2,048 - 1,086 = 962
bytes. In the worst case, a process would need n pages plus 1 byte. It would be allocated n + 1
frames, resulting in internal fragmentation of almost an entire frame.
If process size is independent of page size, we expect internal fragmentation to average
one-half page per process. This consideration suggests that small page sizes are desirable.
However, overhead is involved in each page-table entry, and this overhead is reduced as the
size of the page increases. Also, disk I/O is more efficient when the amount of data being
transferred is larger.
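The worked example above can be checked in a few lines:

```python
# Frames needed and internal fragmentation for a process under paging,
# reproducing the 72,766-byte / 2,048-byte-page example from the text.
import math

def frames_and_waste(process_bytes, page_size):
    frames = math.ceil(process_bytes / page_size)   # frames are whole units
    waste = frames * page_size - process_bytes      # unused tail of last frame
    return frames, waste

print(frames_and_waste(72766, 2048))          # (36, 962)
print(frames_and_waste(10 * 2048 + 1, 2048))  # worst case: (11, 2047)
```

The second call shows the worst case described above: n pages plus one byte costs n + 1 frames and wastes almost a whole frame.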
[Figure 6.15: Free Frames (a) before allocation and (b) after allocation — frames 14, 13, 18, and 20 are taken from the free-frame list for pages 0 to 3 of a new process, their numbers are recorded in the new process's page table, and frame 15 remains free.]
When a process arrives in the system to be executed, its size, expressed in pages, is examined.
Each page of the process needs one frame. Thus, if the process requires n pages, at least n
frames must be available in memory. If n frames are available, they are allocated to this
arriving process. The first page of the process is loaded into one of the allocated frames, and the
frame number is put in the page table for this process. The next page is loaded into another
frame, its frame number is put into the page table, and so on (Figure 6.15).
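The allocation step can be sketched with the frame numbers shown in Figure 6.15 (the free-frame list there is 14, 13, 18, 20, 15):

```python
# A sketch of loading a process under paging: take one free frame per page
# and record each frame number in the new process's page table.

def load_process(free_frames, n_pages):
    """Allocate one frame per page; returns (page_table, remaining_free_list).
    Raises MemoryError if fewer than n_pages frames are available."""
    if len(free_frames) < n_pages:
        raise MemoryError("not enough free frames")
    page_table = free_frames[:n_pages]     # page i is placed in frame page_table[i]
    return page_table, free_frames[n_pages:]

free_list = [14, 13, 18, 20, 15]
page_table, free_list = load_process(free_list, 4)
print(page_table)  # [14, 13, 18, 20] -- matches the page table in Figure 6.15(b)
print(free_list)   # [15]
```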

An important aspect of paging is the clear separation between the programmer's view of
memory and the actual physical memory. The programmer views memory as one single space,
containing only this one program. In fact, the user program is scattered throughout physical
memory, which also holds other programs. The difference between the programmer's view of
memory and the actual physical memory is reconciled by the address-translation hardware: the
logical addresses are translated into physical addresses. This mapping is hidden from the
programmer and is controlled by the operating system. The user process by definition is unable
to access memory it does not own; it has no way of addressing memory outside of its page table.

If the editor code is shared, we need only one copy of the editor (150 KB), plus 40 copies of the
50 KB of data space per user. The total space required is now 2,150 KB instead of 8,000 KB, a
significant savings.
Other heavily used programs can also be shared: compilers, window systems, run-time
libraries, database systems, and so on. To be sharable, the code must be reentrant. The
read-only nature of shared code should not be left to the correctness of the code; the operating
system should enforce this property.
Some operating systems implement shared memory using shared pages. Organizing memory
according to pages provides numerous benefits in addition to allowing several processes to
share the same physical pages.
Advantages and Disadvantages of Paging

Advantages of Paging

Listed below are advantages of paging:
• The paging technique is easy to implement.
• The paging technique makes efficient utilization of memory.
• The paging technique supports time-sharing systems.
• The paging technique supports non-contiguous memory allocation.

Disadvantages of Paging

Listed below are disadvantages of paging:
• Paging may encounter a problem called page break.
• When the number of pages in virtual memory is quite large, maintaining the page table
  becomes hectic.

Segmentation

When writing a program, a programmer thinks of it as a main program with a set of methods,
procedures, or functions. It may also include various data structures: objects, arrays, stacks,
variables, and so on. Each of these modules or data elements is referred to by name. The
programmer talks about "the stack," "the math library," and "the main program" without caring
what addresses in memory these elements occupy. She is not concerned with whether the stack
is stored before or after the Sqrt function. Segments vary in length, and the length of each is
intrinsically defined by its purpose in the program. Elements within a segment are identified by
their offset from the beginning of the segment: the first statement of the program, the seventh
stack frame entry in the stack, the fifth instruction of the Sqrt, and so on.
Segmentation is a memory-management scheme that supports this programmer view of
memory. A logical address space is a collection of segments. Each segment has a name and a
length. The addresses specify both the segment name and the offset within the segment.

The programmer therefore specifies each address by two quantities: a segment name and an
offset. For simplicity of implementation, segments are numbered and are referred to by a
segment number, rather than by a segment name. Thus, a logical address consists of a
two-tuple:

    <segment-number, offset>

Normally, when a program is compiled, the compiler automatically constructs segments
reflecting the input program. A C compiler might create separate segments for the following:
1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library

[Figure 6.19: Programmer's View of a Program — a program seen as a collection of segments (main program, subroutine, stack, symbol table, sqrt), each element addressed by a segment and an offset in the logical address space.]

Libraries that are linked in during compile time might be assigned separate segments. The
loader would take all these segments and assign them segment numbers.
Segmentation Hardware
Although the programmer can refer to objects in the program by a two-dimensional address,
the actual physical memory is still, of course, a one-dimensional sequence of bytes. Thus, we
must define an implementation to map two-dimensional programmer-defined addresses into
one-dimensional physical addresses. This mapping is effected by a segment table. Each entry
in the segment table has a segment base and a segment limit. The segment base contains the
starting physical address where the segment resides in memory, and the segment limit
specifies the length of the segment. The use of a segment table is illustrated in Figure 6.20.
A logical address consists of two parts:
• a segment number, s,
• an offset into that segment, d.
The segment number is used as an index into the segment table. The offset d of the logical
address must be between 0 and the segment limit. If it is not, we trap to the operating system
(logical addressing attempt beyond the end of the segment). When an offset is legal, it is added
to the segment base to produce the address in physical memory of the desired byte. The
segment table is thus essentially an array of base-limit register pairs.

[Figure 6.20: Segmentation Hardware — the segment number s of the CPU's logical address indexes the segment table; the offset d is compared with the limit and, if legal, is added to the base to form the physical address, otherwise a trap (addressing error) occurs.]
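The limit check of Figure 6.20 can be sketched in a few lines; the (base, limit) pairs below are taken from the segment map table of Figure 6.21, and the trap is modeled as an exception.

```python
# A sketch of segmentation hardware: the segment table is an array of
# (base, limit) pairs; an offset at or past the limit traps.

SEGMENT_TABLE = [(3000, 500), (4000, 200), (4800, 100)]  # from Figure 6.21

def translate(s, d):
    base, limit = SEGMENT_TABLE[s]
    if d >= limit:                # offset must lie within the segment
        raise RuntimeError("trap: addressing error beyond end of segment")
    return base + d               # legal: add offset to the segment base

print(translate(0, 499))   # 3499 -- last legal byte of segment 0 (MAIN)
print(translate(2, 50))    # 4850 -- inside segment 2 (SUB2)
try:
    translate(1, 300)             # offset 300 >= limit 200 of segment 1
except RuntimeError as e:
    print(e)               # trap: addressing error beyond end of segment
```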

[Figure 6.21: Example of Segmentation — the Segment Map Table (SMT) for process 1 maps segment 0 (MAIN, limit 500, base 3000), segment 1 (SUB1, limit 200, base 4000), and segment 2 (SUB2, limit 100, base 4800), all with Executable access, into main memory; the operating system occupies the low addresses and free areas lie between the segments.]
Let us consider an example of segmentation. Suppose a 16-bit address is used, with 4 bits for
the segment number and 12 bits for the segment offset, so the maximum segment size is 4096
and the maximum number of segments that can be referred to is 16. The figure above shows
how address translation is done in the case of segmentation.
When a program is loaded into memory, the segmentation system tries to locate space that is
large enough to hold the first segment of the process. Space information is obtained from the
free list maintained by the memory manager. Then it tries to locate space for the other
segments. Once adequate space is located for all the segments, it loads them into their
respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system can easily
translate a logical address into a physical address on execution of a program.
The segment number is mapped to the segment table. The limit of the respective segment is
compared with the offset. If the offset is less than the limit, then the address is valid; otherwise
it throws an error as the address is invalid.
In the case of valid addresses, the base address of the segment is added to the offset to get the
physical address of the actual word in the main memory.
Fragmentation in Segmentation
Each process is loaded by bringing all of its segments into main memory. Every segment of the
process is loaded into main memory by creating partitions dynamically, matching the size of
each segment. This creates an exact fit for every segment. Segmentation is free of internal
fragmentation. However, segmentation suffers from external fragmentation.
Every program/process may occupy more than one non-contiguous segment, similar to
dynamic partitioning. Consider a case where a larger segment is evicted and a segment which
is smaller is put in its place. As the new segment is smaller, it leaves an area in the partition
which remains unused. This is called external fragmentation.
The 'holes' created by external fragmentation are dealt with by implementing a process called
compaction. This is a costly process with a large overhead and hence must not be carried out
very often.
' " ·1, ~ ,"~:. ~ · ,~4i.

.. ~ 'ii~,:::( .., of Se1PDentation

l!IZh:~ - - - - - - -- - -- - __ __ __,. 1

i
ce in comparison to Page table
in p,1gin g. Ii
entire address space.
'
,
mpared to tho page table in pagii~ /
~- --- --- -
Disadvantages of Segmentation

• As processes are loaded and removed from the memory, the free memory space is broken
  into little pieces, causing external fragmentation.
• It is difficult to allocate contiguous memory to variable-sized partitions.
• Memory management algorithms are costly; hence the implementation cost is high.

Differences between Paging and Segmentation

The following are the differences between paging and segmentation, as summarized in
Table 6.1.

1. In paging, the program is divided into fixed-size pages. In segmentation, the program is
   divided into variable-size sections.
2. In paging, the page size is determined by the hardware. In segmentation, the section size is
   given by the user.
3. Paging can result in internal fragmentation. Segmentation can result in external
   fragmentation.
4. In paging, the logical address is split into a page number and a page offset. In
   segmentation, the logical address is split into a section number and a section offset.
5. Paging uses a page table that encloses the base address of every page. Segmentation uses a
   segment table that encloses the base address and limit of every segment.
6. The page table is employed to keep up the page data. The section table maintains the
   section data.
7. In paging, the operating system must maintain a free-frame list. In segmentation, the
   operating system maintains a list of holes in the main memory.
8. The size of a page must always be equal to the size of a frame. There is no constraint on the
   size of segments.

Table 6.1: Differences between Paging and Segmentation
Segmented Paging

Segmented paging is a feature seen in some modern computers. The main memory is split into
variable-size segments, which are subsequently partitioned into smaller fixed-size pages. Each
segment has a page table, and each process has many page tables. The page table holds
information for each page of the segment, whereas the segment table has information about
every segment. Page tables are linked to segment tables, and segment tables to individual
pages.
Advantages are that less memory is used, page sizes are more flexible, memory allocation is
more accessible, and there is an extra level of data access security over paging. The process
does not cause external fragmentation, since pages are created from segments.
Implementation requires an STR (segment table register) and a PMT (page map table). Each
virtual address in this method consists of a segment number, a page number, and an offset
within that page. The segment number indexes into the segment table, which returns the page
table's base address for that segment. The page number is an index into the page table, each
item of which represents a page frame. The physical address is obtained by combining the PFN
(page frame number) and the offset. As a result, addressing may be defined by the function:

    va = (s, p, d)

Here, va is the virtual address, s determines the number of segments (size of ST), p determines
the number of pages per segment (size of PT), and d determines the page size.

[Figure 6.22: Segmented Paging — the CPU issues a virtual address (s, p, d); s indexes the segment table to locate the page table, p indexes the page table to obtain the frame, and the frame combined with d forms the physical address; an invalid reference causes a trap.]
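The two-level lookup va = (s, p, d) can be sketched as follows (the tables and sizes below are illustrative, not from the text):

```python
# A sketch of segmented paging translation: s picks a page table via the
# segment table, p picks a frame, and the physical address is
# frame * PAGE_SIZE + d.

PAGE_SIZE = 1024
SEGMENT_TABLE = [          # each entry: the page table for that segment
    [5, 2, 7],             # segment 0 has three pages, in frames 5, 2, 7
    [1, 9],                # segment 1 has two pages, in frames 1 and 9
]

def translate(s, p, d):
    page_table = SEGMENT_TABLE[s]        # segment table -> page table
    if p >= len(page_table) or d >= PAGE_SIZE:
        raise RuntimeError("trap: invalid segment/page reference")
    frame = page_table[p]                # page table -> page frame number
    return frame * PAGE_SIZE + d         # PFN combined with the offset

print(translate(0, 2, 100))   # 7268 -- frame 7 * 1024 + 100
print(translate(1, 0, 0))     # 1024 -- start of frame 1
```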

Advantages and Disadvantages of Segmented Paging

The benefits of segmented paging are as follows:
• Each segment is represented by a single entry in the segment table. It lowers memory use.
• The segment size determines the size of the page table.
• It eliminates the issue of external fragmentation.

The drawbacks of segmented paging are as follows:
• Internal fragmentation appears.
• Compared to paging, the complexity level is substantially higher.