Chapter 3


27 January 2023 OS 1

– Memory is a large array of words or bytes, each with its
own address.
– It is a repository of quickly accessible data shared by the
CPU and I/O devices.
– Memory is a volatile storage device; it loses its contents
during a system or power failure.
– Memory is becoming bigger and bigger, so why do we
worry?
– Parkinson's law: "Programs expand to fill the memory
available to hold them."
– Hence memory must be managed; this is the job of memory
management.
The operating system is responsible for the
following activities in connection with memory
management:
– Allocate and deallocate the memory space as needed.
– Keep track of which parts of memory are currently being
used and by whom and which parts are free.
– Decide which process to load when memory space
becomes available.
– Swapping between main memory and disk when
memory is too small to hold all the processes.

Basic memory management
Monoprogramming without swapping or paging
• Load one program in memory at a time and allow it to
use all of memory; sharing it only with the OS
• Three ways of organizing memory

(a) used on mainframes and minicomputers
(b) used on some palmtop computers and
embedded systems
(c) used by early PCs; the portion of the system in
ROM is the BIOS

Multiprogramming with fixed partitions
– Mono-programming is not used any more except
in simple embedded systems
– Reasons for multiprogramming
• To make it easier to program an application by splitting
it up into two or more processes
• Large computers often provide interactive services to
several people simultaneously, which requires the
ability to have more than one process in memory at
once in order to get reasonable performance
• Most process spend a substantial fraction of their time
waiting for disk I/O to complete; hence
multiprogramming increases CPU utilization.

– Divide the memory into n (possibly unequal) partitions;
this may be done manually at boot time.
– When a job arrives, put it in a queue for the smallest
partition large enough to hold it.
– The queue can be separate for each partition or there
could be a single queue.
– Having separate queues is disadvantageous: the queue
for a larger partition may be empty (partition 3) whereas
the queue for small partitions is full.

– For a single queue when the partition becomes free
• The job closest to the front of the queue that fits in it could
be loaded; may waste memory
• Search the whole queue and pick the largest job that fits;
this may discriminate against small jobs, which may even be
more important, such as interactive jobs
• One solution is to reserve one small partition for such jobs or
• Have a rule saying that a job may not be skipped over more
than k times.

Fragmentation
• Fragmentation occurs in a dynamic memory allocation system
when most of the free blocks are too small to satisfy any
request. It is, in general, the inability to use the available
memory. It has two types:
• Internal fragmentation: the allocated memory is slightly larger
than the requested memory; this size difference is internal to
a partition but is not being used.
• External fragmentation: enough total memory space exists to
satisfy a request, but it is not contiguous.
• External fragmentation can be reduced by memory
compaction:
• Shuffle memory contents to place all free memory together in
one large block; but this may take much CPU time.
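As a small worked sketch of internal fragmentation, the following assumes a hypothetical fixed partition size of 4 KB; the request sizes are made up for illustration:

```python
# Internal fragmentation with fixed-size partitions (hypothetical sizes).
PARTITION_SIZE = 4096  # bytes

def internal_fragmentation(request: int) -> int:
    """Bytes wasted inside the partition(s) that satisfy `request`."""
    # A request always consumes a whole number of partitions.
    partitions = -(-request // PARTITION_SIZE)  # ceiling division
    return partitions * PARTITION_SIZE - request

print(internal_fragmentation(4000))  # 96 bytes wasted
print(internal_fragmentation(4097))  # 4095 bytes wasted
```

Note how a request only one byte over a partition boundary wastes almost an entire partition.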
Binding of Instruction and data to
Memory
• Address binding can happen at three different
stages
– Compile time: if the memory location is known a priori,
absolute code can be generated, but the code must be
recompiled if the starting location changes
– Load time: relocatable code can be generated if
memory location is not known at compile time
– Execution time: binding delayed until run time if
the process can be moved during its execution
from one memory segment to another: need
hardware support for address maps.
Paging
• A solution to the fragmentation problem is paging.
• Paging is a memory management mechanism that
allows the physical address space of a process to be
non-contiguous.
• Here logical memory is divided into fixed-size blocks
called pages, and physical memory into blocks of the
same size called frames.
• The pages belonging to a process are loaded
into available memory frames.

Cont.…
• It is a fixed-size partitioning scheme.
• In paging, both main memory and secondary memory are
divided into equal fixed-size partitions.
• The partitions of secondary memory are known as pages
and the partitions of main memory are known as frames.
• Paging is a memory management method used to
fetch processes from secondary memory into the
main memory in the form of pages.
• In paging, each process is split into parts where the
size of every part is the same as the page size.
• The size of the last part may be less than the page size.
The pages of the process are stored in the frames of
main memory depending on their availability.

Cont..
• Paging is a memory management scheme that allows a
process to be stored in memory in a non-contiguous
manner. Storing a process in a non-contiguous
manner solves the problem of external fragmentation.
• For implementing paging the physical and logical memory
spaces are divided into the same fixed-sized blocks.
• These fixed-sized blocks of physical memory are
called frames, and the fixed-sized blocks of logical memory
are called pages.
• When a process needs to be executed the process pages
from logical memory space are loaded into the frames of
physical memory address space.
• Now the address generated by the CPU
is divided into two parts, i.e. a page number and a page offset.
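The page-number/offset split described above can be sketched as follows, assuming hypothetical 4 KB pages (a 12-bit offset); the address value is arbitrary:

```python
# Splitting a logical address into page number and offset
# (assumed 4 KB pages, i.e. a 12-bit offset).
PAGE_SIZE = 4096
OFFSET_BITS = 12  # log2(PAGE_SIZE)

def split(logical_address: int) -> tuple[int, int]:
    page_number = logical_address >> OFFSET_BITS   # high-order bits
    offset = logical_address & (PAGE_SIZE - 1)     # low-order bits
    return page_number, offset

print(split(20_500))  # (5, 20), since 20500 = 5*4096 + 20
```

Because the page size is a power of two, the split is just a shift and a mask.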

Page Table

• A page table is the data structure used by a virtual
memory system in a computer operating system to
store the mapping between virtual
addresses and physical addresses.
• A virtual address is also known as a logical address
and is generated by the CPU, while a physical
address is an address that actually exists in
memory.
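A minimal sketch of such a lookup, with a made-up page table and assumed 4 KB pages:

```python
# Translating a virtual address through a page table
# (assumed 4 KB pages; the table contents are made up).
PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 2: 9}  # page number -> frame number

def translate(virtual_address: int) -> int:
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]          # raises KeyError on an unmapped page
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

The offset is carried over unchanged; only the page number is remapped to a frame number.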

Segmentation
• It supports the user’s view of the memory.
• The process is divided into the variable size
segments and loaded to the logical memory
address space.
• The logical address space is the collection of
variable size segments. Each segment has
its name and length.
• For the execution, the segments from logical
memory space are loaded to the physical
memory space.
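A segment-table lookup with a limit check might be sketched like this; the base/limit values are hypothetical:

```python
# Segment-table translation with a limit check
# (hypothetical segment table; base/limit in bytes).
segment_table = {
    0: (1400, 1000),   # segment -> (base, limit)
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:                 # protect against out-of-segment access
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

Unlike paging, the limit must be checked explicitly because segments have variable sizes.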
Segmentation with Paging

• Both paging and segmentation have advantages
and disadvantages, so it is better to combine the two
schemes to improve on each.
• The combined scheme is known as paged segmentation:
each segment in this scheme is divided into pages, and
each segment maintains its own page table.
• So the logical address is divided into the following 3 parts:
• Segment number (S)
• Page number (P)
• The displacement or offset number (D)
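A minimal sketch of translating such a three-part (S, P, D) address, assuming hypothetical per-segment page tables and 1 KB pages:

```python
# Paged segmentation: a logical address is (segment, page, offset).
# Hypothetical layout: per-segment page tables, 1 KB pages.
PAGE_SIZE = 1024

# segment -> page table (page number -> frame number)
segment_page_tables = {
    0: {0: 2, 1: 5},
    1: {0: 8},
}

def translate(s: int, p: int, d: int) -> int:
    frame = segment_page_tables[s][p]   # two-level lookup: segment, then page
    return frame * PAGE_SIZE + d

print(translate(0, 1, 100))  # frame 5 -> 5*1024 + 100 = 5220
```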
PAGING vs. SEGMENTATION

Basic: a page is a fixed-size block; a segment is of variable size.

Fragmentation: paging may lead to internal fragmentation;
segmentation may lead to external fragmentation.

Address: the CPU divides the user-specified address into a page
number and an offset; the user specifies each address by two
quantities, a segment number and an offset (bounded by the
segment limit).

Size: the hardware decides the page size; the segment size is
specified by the user.

Table: paging involves a page table that contains the base address
of each page; segmentation involves a segment table that contains
each segment's base address and limit (segment length).

Summary
• Paging and segmentation are both memory
management schemes.
• Paging divides memory into fixed-size
blocks, whereas segmentation divides the memory
space into segments of variable size.
• Paging leads to internal fragmentation, whereas
segmentation leads to external fragmentation.

Multiprogramming with variable
partitions-swapping
– Sometimes there may not be enough memory to hold
all the currently active processes
– Hence, some processes must be kept on the disk and
brought into the memory to run
– Two methods: swapping and virtual memory
– Swapping: bring in each process in its entirety, run it for a
while, then put it back onto disk.
– Virtual memory: allows the programs to run even when
they are partially in memory.

Data structures to keep track of memory
• There are two ways: bitmaps and linked list
• Bit maps
• Memory is divided into fixed size allocation units,
may be as small as a few words or as large as
several kilobytes.
• Corresponding to each allocation unit there is a bit
in the bit map, which is 0 if the unit is free and 1 if it is
occupied.
• In the figure tick marks show allocation units.
(a) A part of memory with five processes and three holes. The tick marks
show the memory allocation units. The shaded regions (0 in the bitmap) are
free. (b) The corresponding bitmap. (c) The same information as a list.
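Searching a bitmap for a run of free allocation units can be sketched as follows; the bitmap contents here are made up and do not correspond to the figure:

```python
# Tracking memory with a bitmap: 0 = free unit, 1 = allocated
# (toy example; unit size and map contents are made up).
bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 1]

def find_free_run(bits, n):
    """Return the index of the first run of n free units, or -1."""
    run_start, run_len = 0, 0
    for i, b in enumerate(bits):
        if b == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return -1

print(find_free_run(bitmap, 3))  # 2  (units 2..4 are free)
print(find_free_run(bitmap, 4))  # -1 (no run of four free units)
```

This linear scan for a run of k consecutive 0 bits is exactly the slow operation that makes bitmaps less attractive for large memories.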

• The size of the allocation unit is an important design issue
a) The smaller the allocation unit the larger the bit map.
e.g., let an allocation unit be 4 bytes
– 32 bits of memory will require 1 bit of the map
– 32n bits of memory will require n bits for the map=>1/33 of
memory
b) The larger the allocation unit, the larger the internal
fragmentation, since a partition must be an integral multiple of
allocation units and the minimum partition size is 1
allocation unit.
Linked List
– Maintain a linked list of allocated and free memory segments,
where a segment is either a process or a hole between two
processes and contains a number of allocation units
– Each entry in the list consists of
• Type: Process(P) or Hole(H)
• The address at which it starts
• The length
• A pointer to the next entry
• How to update a list
– A terminating process normally has two neighbors
(except the first and the last)
– These may be processes or holes, leading to the
following four combinations

– To facilitate merging, a doubly linked list is used.
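The merging step can be sketched with a plain Python list standing in for the linked list; each entry is a (type, start, length) tuple, and the memory layout is made up:

```python
# Coalescing free memory: when a process terminates, turn its
# segment into a hole and merge it with any adjacent holes.
# Entries are ('P' or 'H', start, length), kept sorted by start.
def free_segment(segments, index):
    kind, start, length = segments[index]
    segments[index] = ('H', start, length)
    # merge with the following hole, if any
    if index + 1 < len(segments) and segments[index + 1][0] == 'H':
        segments[index] = ('H', start, length + segments[index + 1][2])
        del segments[index + 1]
    # merge with the preceding hole, if any
    if index > 0 and segments[index - 1][0] == 'H':
        segments[index - 1] = ('H', segments[index - 1][1],
                               segments[index - 1][2] + segments[index][2])
        del segments[index]
    return segments

mem = [('H', 0, 4), ('P', 4, 3), ('H', 7, 5)]
print(free_segment(mem, 1))  # [('H', 0, 12)] -- all three merge into one hole
```

This covers the four combinations from the slide: no neighbor hole, a hole before, a hole after, or holes on both sides.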


What is Demand Paging?

• The basic idea behind demand paging is that when
a process is swapped in, its pages are not all swapped
in at once.
• Rather, they are swapped in only when the process
needs them (on demand).
• The swapper that does this is termed a lazy swapper,
although pager is a more accurate term.

• When the process requires a page that is not loaded into memory, a
page fault trap is triggered and the following steps are followed:
1. The memory address requested by the process is first checked, to verify
the request made by the process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a
free-frame list, where the required page will be moved.
4. A new operation is scheduled to move the necessary page from disk to the
specified memory location. ( This will usually block the process on an I/O wait,
allowing some other process to use the CPU in the meantime. )
5. When the I/O operation is complete, the process's page table is updated with the
new frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the
beginning.
Note: There are cases when no pages are loaded into the memory initially, pages are
only loaded when demanded by the process by generating page faults. This is
called Pure Demand Paging
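The steps above can be condensed into a toy page-fault handler; the page table, free-frame list, and backing store here are all simulated, not real OS structures:

```python
# Demand paging: a sketch of the page-fault path described above.
PAGE_TABLE = {}          # page -> frame, only for resident (valid) pages
FREE_FRAMES = [3, 7]     # free-frame list
BACKING_STORE = {0: b"code", 1: b"data"}  # pages that exist on disk

def access(page):
    if page not in BACKING_STORE:        # steps 1-2: invalid reference
        raise MemoryError("invalid address: process terminated")
    if page not in PAGE_TABLE:           # page fault
        frame = FREE_FRAMES.pop()        # step 3: locate a free frame
        _data = BACKING_STORE[page]      # step 4: (simulated) disk read
        PAGE_TABLE[page] = frame         # step 5: update table, mark valid
        # step 6: the faulting instruction would now be restarted
    return PAGE_TABLE[page]

print(access(0))  # first touch faults the page in -> frame 7
print(access(0))  # already resident: no fault, same frame
```

In pure demand paging, PAGE_TABLE starts empty exactly as here, and every first touch of a page faults.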

• The pages that are not moved into the memory, are marked as invalid in
the page table.
• For an invalid entry the rest of the table is empty.
• In case of pages that are loaded in the memory, they are marked as valid
along with the information about where to find the swapped out page.

Page Replacement

• As studied in demand paging, only certain pages of a process are
loaded initially into memory. This allows us to get more
processes into memory at the same time. But what happens
when a process requests more pages and no free memory is
available to bring them in? The following steps can be taken to
deal with this problem:
• Put the process in the wait queue, until any other process finishes
its execution thereby freeing frames.
• Or, remove some other process completely from the memory to
free frames.
• Or, find some pages that are not being used right now, move them
to the disk to get free frames. This technique is called Page
replacement and is most commonly used. We have some great
algorithms to carry on page replacement efficiently.

Basic Page Replacement algorithm

• Find the location of the page requested by the ongoing process on the disk.
• Find a free frame. If there is a free frame, use it. If there is no free frame,
use a page-replacement algorithm to select any existing frame to be
replaced, such frame is known as victim frame.
• Write the victim frame to disk. Change all related page tables to indicate
that this page is no longer in memory.
• Move the required page and store it in the frame. Adjust all related page
and frame tables to indicate the change.
• Restart the process that was waiting for this page.
• FIFO Page Replacement
• A very simple way of Page replacement is FIFO (First in First Out)
• As new pages are requested and are swapped in, they are added to tail of
a queue and the page which is at the head becomes the victim.
• It’s not an effective way of page replacement but can be used for small
systems.
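A minimal FIFO simulation that counts page faults; the reference string is a common textbook example and the frame count is arbitrary:

```python
# FIFO page replacement (simple simulation; reference string made up).
from collections import deque

def fifo_faults(references, capacity):
    frames, queue, faults = set(), deque(), 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 10 faults
```

Note that with this particular string, adding a frame *increases* the fault count (9 to 10), the classic Belady's anomaly of FIFO.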
cont
• Least recently used (LRU) page replacement
algorithm → this algorithm replaces the page
which has not been referred to for a long time.
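LRU can be sketched with an ordered dictionary, evicting the entry touched longest ago; the reference string is the same made-up example as before:

```python
# LRU page replacement: evict the page unused for the longest time
# (OrderedDict-based sketch; reference string made up).
from collections import OrderedDict

def lru_faults(references, capacity):
    frames, faults = OrderedDict(), 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(frames) == capacity:
                frames.popitem(last=False)    # evict least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```

Real hardware approximates this with reference bits or counters, since maintaining an exact recency order on every access is expensive.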

Memory allocation algorithms
– Assume that the linked list method is used and the
memory manager knows how much memory to
allocate. It uses one of the following algorithms.
a) First fit: scan the list from the beginning until a
hole big enough to hold the process is found.
• The hole is then broken into two pieces, one for
the process and one for the unused memory,
except in the unlikely case of an exact fit.
• It is a fast algorithm since it searches as little as
possible.
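First fit over a list of holes might be sketched like this; hole addresses and sizes (in KB) are made up:

```python
# First fit over a hole list (start, size) pairs, sizes in KB (made up).
def first_fit(holes, request):
    """Allocate from the first big-enough hole; return (start, holes)."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            leftover = size - request
            if leftover:                      # split the hole in two
                holes[i] = (start + request, leftover)
            else:                             # exact fit: remove the hole
                del holes[i]
            return start, holes
    return None, holes                        # no hole large enough

print(first_fit([(0, 100), (200, 500), (800, 200)], 150))
# -> (200, [(0, 100), (350, 350), (800, 200)])
```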

Cont..
c) Best fit: allocate the smallest hole that is big
enough.
• We must search the entire list, unless the list is
ordered by size.
• This strategy produces the smallest leftover hole.
• It is slower than first fit since it searches the entire list.
• Although the aim was to find the hole that is
closest in size, it results in more wasted memory
than first fit and next fit since it tends to fill up
memory with tiny, useless holes: external
fragmentation is high.
• Is best fit really best?

d) Worst fit: Allocate the largest hole. Again, we
must search the entire list, unless it is sorted by
size.
– This strategy produces the largest leftover hole,
which may be more useful than the smaller
leftover hole from a best-fit approach.
– Simulation also shows that its performance is not
good
– All four algorithms can be sped up by
maintaining separate lists for processes and holes
• The algorithms then search holes, not processes
• Price: moving a freed segment from the process list to
the hole list takes time.
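The strategies can be compared side by side on one made-up hole list; this sketch only selects a hole and does not split it:

```python
# Comparing fit strategies on one hole list (sizes in KB, made up).
def pick_hole(holes, request, strategy):
    candidates = [(start, size) for start, size in holes if size >= request]
    if not candidates:
        return None
    if strategy == "first":                      # first adequate hole
        return candidates[0]
    if strategy == "best":                       # smallest adequate hole
        return min(candidates, key=lambda h: h[1])
    if strategy == "worst":                      # largest hole
        return max(candidates, key=lambda h: h[1])

holes = [(0, 300), (400, 120), (600, 700)]
for s in ("first", "best", "worst"):
    print(s, pick_hole(holes, 100, s))
# first (0, 300) / best (400, 120) / worst (600, 700)
```

The same request picks three different holes, which is exactly why the strategies differ in leftover-hole sizes and, over time, in fragmentation.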

b) Next fit: search for a hole that fits, starting not
from the beginning of the list but from where the
previous search stopped
• Simulation shows that it performs slightly
worse than first fit

Quick fit: maintain separate lists for some of
the more common sizes requested.
– Advantage: searching is very fast
– Disadvantage: when a partition is freed, finding its
neighbors to see if merge is possible is expensive.

Virtual memory
• Problems of previous memory management methods
– Internal/external fragmentation
– Impossible to run processes larger than the physical
memory available
– If one big process is in memory it can prevent other
processes from running.
Solution: splitting a program into pieces, called overlays.
– In the early days, overlays were used when programs did
not fit into the available memory.
– First run overlay 0; when it finishes, it calls another overlay,
and so on.
– The programming design of overlay structures is complex.

– Virtual memory: the basic idea is that the size of a
process (code+ data + stack) may exceed the
amount of physical memory available to hold it.
– The OS keeps those parts of the program currently
in use in memory and the rest on the disk with
pieces of the program being swapped between
disk and memory as needed.
– Virtual memory is the separation of logical memory
from physical memory

Cache memory
 Cache is temporary memory officially termed "CPU cache
memory." This chip-based feature of your computer lets you access
some information more quickly than if you access it from your
computer's main hard drive. The data from the programs and
files you use the most is stored in this temporary memory, which is
also the fastest memory in your computer.
 CACHE VS RAM
 When your computer needs to access data quickly but can't find it in the
cache, it will look for it within random access memory (RAM). RAM is
the main type of computer data storage, holding information and
program processes. It is farther away from the CPU than cache memory
and is not as fast; cache can be on the order of 100 times faster than standard RAM.
 If cache is so fast, why isn’t all data stored there? Cache storage is limited
and very expensive for its space, so it only makes sense to keep the most-
accessed data there and leave everything else to RAM.

Self-check
1. List the basic ways of organizing memory.
2. Discuss the reasons for multiprogramming.
3. What are the differences between contiguous and non-contiguous memory
allocation?
4. Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and
600 KB (in order), how would each of the first-fit, best-fit, and worst-fit
algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in
order)? Which algorithm makes the most efficient use of memory?
5. List and describe the basic page replacement algorithms.
6. What are the benefits of virtual memory?

