Module 4: Memory Management
Outline
Memory Management: Basic bare machine, Resident monitor.
Multiprogramming with fixed partitions.
Multiprogramming with variable partitions.
Paging, Segmentation.
Paged segmentation.
Virtual memory concepts
Demand paging.
Performance of demand paging.
Page replacement algorithms.
Thrashing.
Cache memory organization
Locality of reference
Protection schemes
2
Memory Management
3
Hierarchy of Memory
CPU
Cache Memory
Main Memory
Secondary Memory
4
Hierarchy of Memory
Here the OS has two important responsibilities:
1. Space allocation: the OS decides which process from secondary memory gets which area of main memory.
2. Address translation: converting logical addresses to physical addresses.
5
Basic Bare Machine
• A bare machine is the basic hardware of a computer system that can execute programs on the processor without using an operating system.
• In the early days, before operating systems were developed, instructions were executed directly on the hardware without any intervening software.
• The drawback was that a bare machine accepts programs and instructions only in machine language.
• Because of this, only trained people who were qualified in the computer field and able to instruct the computer in machine language could operate a computer.
• For this reason, the bare machine came to be regarded as inefficient and cumbersome once operating systems were developed.
6
Resident Monitor
7
Memory Allocation Techniques
Memory allocation techniques:
• Fixed partition
• Variable partition
• Paging
• Segmentation
8
Memory Allocation Techniques
9
Contiguous Memory allocation: Fixed Partition
10
Contiguous Memory allocation: Fixed Partition
11
Contiguous Memory allocation: Variable Size Partition
12
Contiguous Memory allocation: Variable Size Partition
14
Memory Allocation Strategies
1. First Fit:
▪ The process is allocated the first sufficient block, scanning from the top of main memory.
▪ Memory is scanned from the beginning and the first available block that is large enough is chosen; thus the first partition that is large enough is allocated.
2. Best Fit:
▪ The process is allocated the smallest sufficient partition among the free partitions.
▪ The entire list of partitions is searched to find the smallest partition whose size is greater than or equal to the size of the process.
15
Memory Allocation Strategies
3. Worst Fit:
▪ Allocate the process to the partition which is the largest sufficient
among the freely available partitions available in the main
memory.
▪ It is opposite to the best-fit algorithm.
▪ It searches the entire list of partitions to find the largest partition
and allocate it to process.
16
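To compare the three strategies concretely, here is a small illustrative sketch in Python (the partition sizes and the 212 KB request below are hypothetical, not taken from the slides):

def first_fit(partitions, size):
    # Return the index of the first free partition large enough, or None.
    for i, p in enumerate(partitions):
        if p >= size:
            return i
    return None

def best_fit(partitions, size):
    # Return the index of the smallest free partition that still fits, or None.
    candidates = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(partitions, size):
    # Return the index of the largest free partition that fits, or None.
    candidates = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return max(candidates)[1] if candidates else None

free = [100, 500, 200, 300, 600]   # hypothetical free partition sizes, in KB
print(first_fit(free, 212))        # 1 -> the 500 KB partition
print(best_fit(free, 212))         # 3 -> the 300 KB partition
print(worst_fit(free, 212))        # 4 -> the 600 KB partition

For the same request, each strategy picks a different partition, which makes the trade-offs between them easy to see.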
Problem: Memory Allocation Algorithm
17
18
19
20
Non-Contiguous Memory Allocation
21
Paging – Basic Idea and its Need
▪ Physical memory (main memory) is divided into fixed-size blocks called frames.
▪ The logical address space (the process image on secondary memory) is divided into blocks of the same fixed size, called pages.
▪ Pages and frames are of the same size.
▪ Whenever a process needs to execute on the CPU, its pages are moved from the hard disk into available frames in main memory.
22
Paging
23
Advantages & Disadvantages of Paging
24
Example
25
Paging
(Figure: page table mapping page number → frame number)
26
Paging
27
Translating Logical Address into Physical Address
28
Translating Logical Address into Physical Address
29
Page Table
A page table is a data structure.
It maps the page number referenced by the CPU to the frame number where that page is stored.
Characteristics:
The page table is stored in main memory.
Number of entries in a page table = number of pages into which the process is divided.
The Page Table Base Register (PTBR) contains the base address of the page table.
Each process has its own independent page table.
30
Page Table
The Page Table Base Register (PTBR) provides the base address of the page table.
This base address is added to the page number referenced by the CPU.
The result gives the page-table entry containing the frame number where the referenced page is stored.
31
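A minimal sketch of the translation just described (the page size and page-table contents below are hypothetical):

PAGE_SIZE = 1024                 # hypothetical 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}  # hypothetical mapping: page number -> frame number

def translate(logical_address):
    # Split the logical address, look up the frame, and rebuild the physical address.
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]   # page-table lookup (base address + page number)
    return frame_number * PAGE_SIZE + offset

print(translate(1050))  # page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074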
Page Table Entry
A page table entry contains several pieces of information about the page.
The information contained in a page table entry varies from operating system to operating system.
The most important item in a page table entry is the frame number.
In general, each entry of a page table contains the following information:
32
Paging: Important Formulas
33
Paging: Important Formulas
34
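Standard relations commonly used in such problems (a summary sketch, assuming sizes are powers of two where logarithms are taken):
• Number of pages = process size / page size
• Number of frames = physical memory size / frame size
• Page table size = number of pages × page-table entry size
• Number of offset bits = log2(page size)
• Logical address bits = page-number bits + offset bits
• Physical address bits = frame-number bits + offset bits
• Physical address = frame number × page size + offset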
Problem based on Page table and Paging
35
Problem based on Page table and Paging
36
Problem based on Page table and Paging
37
Problem based on Page table and Paging
38
Overhead in Paging
39
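A commonly used model for this overhead (a sketch, with s = process size, p = page size, e = page-table entry size):
• Overhead per process ≈ page-table size + average internal fragmentation = (s/p) × e + p/2
• Differentiating with respect to p and setting the derivative to zero gives the optimal page size p = sqrt(2 × s × e).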
Problem on Optimal Page Size
40
Problem on Optimal Page Size
41
Practice Problem on Paging
42
Practice Problem on Paging
43
Practice Problem on Paging
44
Translation Lookaside Buffer (TLB)
The TLB is used to overcome the slower-access problem caused by the extra page-table reference.
The TLB is a page-table cache implemented in fast associative memory.
Its cost is high, so its capacity is limited; thus only a subset of the page-table entries is kept in it.
Each TLB entry contains a page number and the frame number where that page is stored in memory.
45
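A simplified sketch of how a TLB sits in front of the page table (the sizes, mappings, and the simple FIFO eviction for the TLB itself are all hypothetical):

from collections import OrderedDict

PAGE_SIZE = 1024
page_table = {p: p + 10 for p in range(8)}   # hypothetical page -> frame mapping
TLB_CAPACITY = 4
tlb = OrderedDict()                          # small cache of (page -> frame) entries

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page in tlb:                          # TLB hit: no page-table memory access needed
        frame = tlb[page]
    else:                                    # TLB miss: consult the page table in memory
        frame = page_table[page]
        if len(tlb) >= TLB_CAPACITY:         # evict the oldest TLB entry if it is full
            tlb.popitem(last=False)
        tlb[page] = frame
    return frame * PAGE_SIZE + offset

print(translate(5000))  # first access to page 4: TLB miss
print(translate(5100))  # same page again: TLB hit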
TLB - Working
47
TLB
48
TLB
▪ Effective Access Time (EAT) calculation:
• Question: With an 80 percent hit ratio in the TLB, a 20-nanosecond TLB search time, and a 100-nanosecond main-memory access time, what is the effective access time to access a page?
• Answer: the hit ratio is 80 percent and the miss ratio is 20 percent.
49
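A sketch of the standard calculation (assuming that on a TLB miss the page-table lookup costs one extra memory access):
EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100) = 96 + 44 = 140 nanoseconds.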
Multilevel Paging
50
Disadvantages of Paging
1. Additional memory reference
▪ An extra memory reference is required to read information from the page table.
▪ Every instruction therefore requires two memory accesses: one for the page table and one for the instruction or data.
2. Size of the page table
▪ Page tables can be too large to keep in main memory.
▪ The page table contains an entry for every page in the logical address space, so a larger process has a larger page table.
3. Internal fragmentation
▪ A process size may not be an exact multiple of the page size.
▪ So some space remains unoccupied in the last page of a process. This results in internal fragmentation.
51
Segmentation
52
Segmentation
▪ Segmentation works from the user's point of view: the logical address space of a process is a collection of code, data, and stack.
▪ Here the logical address space of a process is divided into blocks of varying size, called segments.
▪ Each segment contains a logical unit of the process.
▪ Whenever a process is to be executed, its segments are moved from secondary storage to main memory.
▪ Each segment is allocated a chunk of free memory equal in size to that segment.
▪ The OS maintains one table, known as the segment table, for each process. It includes the size of each segment and the location in memory where the segment has been loaded.
53
54
Segmentation
• The logical address is divided into two parts:
1. Segment number: an identifier for the segment.
2. Offset: the actual location within the segment.
55
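A minimal sketch of the corresponding lookup (the segment table below, with its base and limit values, is hypothetical):

# Hypothetical segment table: segment number -> (base address, limit/size)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment_number, offset):
    # Check the offset against the segment limit, then add the segment base.
    base, limit = segment_table[segment_number]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset outside the segment")
    return base + offset

print(translate(2, 53))       # 4300 + 53 = 4353
try:
    print(translate(1, 500))  # offset 500 exceeds segment 1's limit of 400
except MemoryError as err:
    print(err)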
Address Mapping in Segmentation
56
Address Mapping in Segmentation
57
Segmentation
58
Problem based on Segmentation
59
Problem based on Segmentation
60
Problem based on Segmentation
61
Virtual Memory
▪ Virtual memory is a technique that allows a process to execute even though it is only partially loaded in main memory.
▪ The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory (main memory) available for it.
▪ The operating system keeps the parts of the program currently in use in main memory and the rest on the disk.
▪ Program-generated addresses are called virtual addresses and form the virtual address space.
▪ The MMU (Memory Management Unit) maps virtual addresses onto physical memory addresses.
62
Advantage of Virtual Memory
▪ Fewer I/O operations are needed to load or swap each user program into memory.
▪ A program is no longer constrained by the amount of physical memory available; users can write programs for an extremely large virtual address space.
▪ Because each user program can take less physical memory, more programs can run at the same time, with a corresponding increase in CPU utilization and throughput.
63
Swapping
▪ Example: A system has a physical memory of size 32 MB. Now suppose there are 5 processes, each of size 8 MB, that all want to execute simultaneously. How is this possible?
▪ The solution is to use swapping. Swapping is a technique in which processes are moved between main memory and secondary memory (disk).
▪ Swapping uses some portion of secondary memory as a backing store, known as the swap area.
▪ The operation of moving a process from memory to the swap area is called "swap out", and moving it from the swap area back to memory is known as "swap in".
64
Swapping
65
What is Thrashing?
Thrashing is a phenomenon that occurs in operating systems when the system spends an excessive amount of time swapping data between physical memory (RAM) and virtual memory (disk storage) due to high memory demand and low available resources.
Thrashing can occur when there are too many processes running on a system and not enough physical memory to accommodate them all.
As a result, the operating system must constantly swap pages of memory between physical memory and virtual memory. This can lead to a significant decrease in system performance, as the CPU spends more time swapping pages than actually executing code.
66
Thrashing
67
Thrashing
68
Thrashing
▪ The main causes of thrashing in an operating system are:
▪ 1. High degree of multiprogramming: When too many processes are running on a system, the operating
system may not have enough physical memory to accommodate them all. This can lead to thrashing, as the
operating system is constantly swapping pages of memory between physical memory and disk.
▪ 2. Lack of frames: Frames are the units of memory that are used to store pages of memory. If there are not
enough frames available, the operating system will have to swap pages of memory to disk, which can lead to
thrashing.
▪ 3. Page replacement policy: The page replacement policy is the algorithm that the operating system uses to
decide which pages of memory to swap to disk. If the page replacement policy is not effective, it can lead to
thrashing.
▪ 4. Insufficient physical memory: If the system does not have enough physical memory, it will have to swap
pages of memory to disk more often, which can lead to thrashing.
▪ 5. Inefficient memory management: If the operating system is not managing memory efficiently, it can lead
to fragmentation of physical memory, which can also lead to thrashing.
▪ 6. Poorly designed applications: Applications that use excessive memory or that have poor memory
management practices can also contribute to thrashing.
69
Thrashing
The following are some of the symptoms of thrashing in an operating system:
1. High CPU utilization: When a system is thrashing, the CPU is spending a lot of time
swapping pages of memory between physical memory and disk. This can lead to high
CPU utilization, even when the system is not performing any significant work.
2. Increased disk activity: When the system is thrashing, disk activity increases significantly as the system tries to swap data between physical memory and virtual memory.
3. High page fault rate: A page fault is an event that occurs when the CPU tries to access a
page of memory that is not currently in physical memory. Thrashing can cause a high
page fault rate, as the operating system is constantly swapping pages of memory
between physical memory and disk.
4. Slow response time: When the system is thrashing, its response time slows significantly.
70
Prevent from Thrashing
Adjust the swap file size: if the system swap file is not configured correctly, disk thrashing can occur.
Increase the amount of RAM: since insufficient memory can cause disk thrashing, one solution is to add more RAM. With more memory, the computer can handle tasks easily and does not have to swap excessively. This is generally the best long-term solution.
Decrease the number of applications running on the computer: if too many applications are running in the background, they consume a large share of system resources, and the little that remains can lead to thrashing. Closing some applications releases resources and helps avoid thrashing to some extent.
Replace programs: replace memory-heavy programs with equivalents that use less memory.
71
Virtual Memory
72
Demand Paging
▪ Demand paging is similar to a paging system with swapping, where processes reside in secondary memory. When we want to execute a process, we swap it into main memory.
▪ Rather than swapping the entire process into memory, we use a lazy swapper.
▪ A lazy swapper never swaps a page into memory unless that page will be needed.
▪ If a process needs to be swapped in, the pager brings in only those pages that will be used by the process.
▪ This avoids reading unused pages and decreases both the swap time and the amount of physical memory needed.
73
Demand Paging
74
Demand Paging- Page Fault
The page table includes a valid-invalid bit for each page entry.
If the bit is valid, the page is currently available in memory.
If it is set to invalid, the page is either invalid or not present in main memory.
If a process tries to access a page that is not in main memory, it causes a page fault.
The pager then generates a trap to the OS, which tries to swap the page in.
75
Demand Paging – Page Fault
The following steps are followed to handle a page fault:
1. Check the page table for the process to determine whether the reference is valid or invalid.
2. If the reference is invalid, terminate the process; if the page is valid but not currently in main memory, a trap is generated.
3. The OS determines the location of that page in the swap area.
4. It then uses the free-frame list to find a free frame and schedules a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, the page table is modified and the valid bit is set.
6. Restart the instruction that caused the fault.
76
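A high-level sketch of these steps in Python (the page table, free-frame list, and swap-area contents are all hypothetical, and the disk read is simulated):

PAGE_SIZE = 1024
page_table = {0: {"valid": True, "frame": 3},      # hypothetical: page 0 already in memory
              1: {"valid": False, "frame": None}}  # page 1 only in the swap area
free_frames = [7, 9]                               # hypothetical free-frame list
swap_area = {1: "contents of page 1"}              # hypothetical backing store
physical_memory = {3: "contents of page 0"}

def access(page):
    entry = page_table.get(page)
    if entry is None:
        raise RuntimeError("invalid reference: terminate the process")  # step 2, invalid page
    if not entry["valid"]:                            # page fault: trap to the OS
        frame = free_frames.pop(0)                    # step 4: take a frame from the free list
        physical_memory[frame] = swap_area[page]      # step 4: disk read into the new frame
        entry["frame"], entry["valid"] = frame, True  # step 5: update the page table
        # step 6: the faulting instruction would now be restarted
    return physical_memory[entry["frame"]]

print(access(1))   # faults, loads page 1 into frame 7, then returns its contents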
Demand Paging – Page Fault
77
Page Replacement Policies - Need
78
Page Replacement Policies - Basics
1. Find the location of the desired page on disk.
2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement algorithm to select a victim frame.
   - Write the victim frame to disk if it is dirty.
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Continue the process by restarting the instruction that caused the trap.
79
Demand Paging in OS
80
Performance of demand Paging in OS
81
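A standard way to quantify this performance (a sketch, with p the page-fault rate, ma the memory-access time, and the page-fault service time covering servicing the fault, reading the page from disk, and restarting the process):
Effective access time = (1 − p) × ma + p × page-fault service time.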
Working Process of Demand Paging
82
Advantages of Demand Paging in OS
83
Virtual Memory: Hardware and control structures
84
First In First Out (FIFO)
FIFO
85
First In First Out (FIFO)
Advantage
• Very simple.
• Easy to implement.
Disadvantage
• A page fetched into memory a long time ago may have now fallen out of use.
• However, this reasoning is often wrong: there are often regions of a program or its data that are heavily used throughout the life of the program.
• Those pages will be repeatedly paged in and out by the FIFO algorithm.
86
First In First Out (FIFO)
87
First In First Out (FIFO)
▪ Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0
▪ 3 frames (3 pages can be in memory at a time)

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  1  2  0
F3:   -  -  1  1  1  1  0  0  0  3  3  3  3  2  2
F2:   -  0  0  0  0  3  3  3  2  2  2  2  1  1  1
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0  0  0
      F  F  F  F  H  F  F  F  F  F  F  H  F  F  H   (F = page fault, H = hit)

Page faults / misses = 12, page hits = 3
▪ Miss ratio = number of misses / number of references = (12/15) × 100 = 80%
▪ Hit ratio = number of hits / number of references = (3/15) × 100 = 20%
88
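The trace above can be reproduced with a short FIFO simulation (a sketch; the queue keeps pages in the order they were brought into memory):

from collections import deque

def fifo(reference_string, frame_count):
    frames = deque()               # oldest page at the left end
    faults = hits = 0
    for page in reference_string:
        if page in frames:
            hits += 1
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()   # evict the page that has been in memory the longest
            frames.append(page)
    return faults, hits

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0]
print(fifo(refs, 3))   # (12, 3): 12 page faults and 3 hits, as in the table above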
Optimal Page Replacement
89
Optimal Page Replacement
90
Optimal Page Replacement
Advantage:
Disadvantage:
91
Optimal Page Replacement Example 1
92
Optimal Page Replacement Example 2
93
Least Recently Used (LRU)
It is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few.
94
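A compact sketch of LRU replacement, using an ordered map to track recency (the reference string below is an illustrative textbook-style example, not necessarily the one on the following slides):

from collections import OrderedDict

def lru(reference_string, frame_count):
    frames = OrderedDict()                  # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # page was referenced: mark it most recently used
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))  # 12 page faults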
Least Recently Used (LRU)
95
Least Recently Used (LRU)
96
97
Belady’s Anomaly
98
Belady’s Anomaly
99
Belady’s Anomaly
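A standard illustration of the anomaly: with FIFO replacement and the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, three frames give 9 page faults while four frames give 10, so adding a frame increases the number of faults. LRU and the optimal algorithm do not exhibit this anomaly.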
Cache Memory Organization
Locality of Reference
Pros and Cons of Cache Memory
Protection in Operating System
The processes in an operating system must be protected from one another's activities.
The OS uses various mechanisms to ensure that only processes that have gained proper authorization from the operating system can operate on the files, memory segments, CPU, and other resources of a system.
Protection refers to mechanisms for controlling the access of programs, processes, or users to the resources defined by a computer system.
Need of Protection in OS
Various needs of protection in the operating system are as follows:
There may be security risks such as unauthorized reading, writing, or modification of data, or preventing the system from working effectively for authorized users.
Protection helps ensure data security, process security, and program security against unauthorized user or program access.
It is important to ensure there are no breaches of access rights, no viruses, and no unauthorized access to existing data.
Its purpose is to ensure that programs, resources, and data are accessed only in accordance with the system's policies.
Goals of Protection
Various goals of protection in the operating system are as follows:
The policies define how processes access the computer system's resources, such as the CPU, memory, software, and even the operating system. Defining them is the responsibility of both the operating system designer and the application programmer. These policies may be modified at any time.
Domains of Protection
Advantages
Disadvantages