
Memory Management

Outline
▪ Memory management: basic bare machine, resident monitor
▪ Multiprogramming with fixed partitions
▪ Multiprogramming with variable partitions
▪ Paging, segmentation
▪ Paged segmentation
▪ Virtual memory concepts
▪ Demand paging
▪ Performance of demand paging
▪ Page replacement algorithms
▪ Thrashing
▪ Cache memory organization
▪ Locality of reference
▪ Protection schemes

2
Memory Management

3
Hierarchy of Memory

CPU

Cache Memory

Main Memory

Secondary Memory
4
Hierarchy of Memory
The OS has two important responsibilities here:
1. Space allocation: the OS decides which process from secondary memory gets which area of main memory.
2. Address translation: converting logical addresses into physical addresses.

5
Basic Bare Machine
• A bare machine is the raw hardware of a computer system, which can execute programs on the processor without using an operating system.
• In the early days, before operating systems were developed, instructions were executed directly on the hardware without any intervening software.
• The drawback was that a bare machine accepts programs and instructions only in machine language.
• Because of this, only trained people who were qualified in the computer field and could understand and instruct the computer in machine language were able to operate it.
• For this reason, the bare machine came to be regarded as inefficient and cumbersome once operating systems were developed.
6
Resident Monitor

7
Memory Allocation Techniques

Memory allocation techniques:
▪ Contiguous memory allocation
   – Fixed partition
   – Variable partition
▪ Non-contiguous memory allocation
   – Paging
   – Segmentation

8
Memory Allocation Techniques

Contiguous Memory allocation

▪ Simple and old method.
▪ Each process occupies a contiguous block of main memory.
▪ When a process is brought into memory, the memory is searched for a chunk of free space large enough to hold the process.

9
Contiguous Memory allocation: Fixed Partition

Multiprogramming with fixed partitions

▪ The number of partitions is fixed.
▪ Memory is divided into fixed-size partitions.
▪ Each partition may contain exactly one process.
▪ The sizes of the partitions need not all be the same.
▪ When a partition is free, a process is selected from the input queue and loaded into the free partition.

10
Contiguous Memory allocation: Fixed Partition

Multiprogramming with fixed partitions:

▪ Advantages:
   – Implementation is simple.
   – Processing overhead is low.
▪ Disadvantages:
   – Limit on process size.
   – Degree of multiprogramming is also limited.
   – Causes external fragmentation because of contiguous memory allocation.
   – Causes internal fragmentation due to the fixed partitioning of memory.

11
Contiguous Memory allocation: Variable Size Partition

Multiprogramming with variable/dynamic partitions

▪ Memory is not divided into fixed partitions, and the number of partitions is not fixed.
▪ Only the required amount of memory is allocated to a process, at run time.
▪ Whenever a process enters the system, a chunk of memory big enough to fit the process is found and allocated; the remaining unoccupied space is treated as another free partition.
▪ When a process terminates, it releases the space it occupied; if that freed partition is adjacent to another free partition, the two can be merged.

12
Contiguous Memory allocation: Variable Size Partition

Multiprogramming with variable/dynamic partitions:

▪ Advantages:
   – No internal fragmentation.
   – No limitation on the number of processes.
   – No limitation on process size.
▪ Disadvantages:
   – Causes external fragmentation:
      • Memory is allocated when a process enters the system and deallocated when it terminates. These operations may leave small holes in memory.
      • Each hole may be so small that no process can be loaded into it, yet the total size of all the holes may be big enough to hold another process.
13
Memory Allocation Strategies
▪ In partition allocation, when there is more than one partition freely available to accommodate a process's request, a partition must be selected.
▪ To choose a particular partition, a partition allocation method is needed. A partition allocation method is considered better if it avoids internal fragmentation.
▪ The common memory allocation algorithms are:
1. First Fit
2. Best Fit
3. Worst Fit

14
Memory Allocation Strategies
1. First Fit:
▪ The process is allocated to the first partition that is large enough, starting from the top of main memory.
▪ It scans memory from the beginning and chooses the first available block that is large enough.
2. Best Fit:
▪ The process is allocated to the smallest partition that is still sufficient, among the free partitions.
▪ It searches the entire list of partitions to find the smallest partition whose size is greater than or equal to the size of the process.
15
Memory Allocation Strategies
3. Worst Fit:
▪ The process is allocated to the largest sufficient partition among the freely available partitions in main memory.
▪ It is the opposite of the best-fit algorithm.
▪ It searches the entire list of partitions to find the largest partition and allocates it to the process.
▪ A code sketch comparing the three strategies follows below.

16
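Below is a minimal Python sketch (not taken from the slides) contrasting the three strategies; the free-block sizes, the process size and the function names are illustrative assumptions only.

def first_fit(free_blocks, size):
    # return the index of the first block that is large enough, else None
    for i, block in enumerate(free_blocks):
        if block >= size:
            return i
    return None

def best_fit(free_blocks, size):
    # smallest block that is still large enough
    candidates = [(block, i) for i, block in enumerate(free_blocks) if block >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(free_blocks, size):
    # largest block that is large enough
    candidates = [(block, i) for i, block in enumerate(free_blocks) if block >= size]
    return max(candidates)[1] if candidates else None

blocks = [100, 500, 200, 300, 600]        # assumed free partition sizes (KB)
process = 212
print("First fit :", first_fit(blocks, process))   # index 1 (500 KB block)
print("Best fit  :", best_fit(blocks, process))    # index 3 (300 KB block)
print("Worst fit :", worst_fit(blocks, process))   # index 4 (600 KB block)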
Problem : Memory allocation Algorithm

17
18
19
20
Non-Contiguous Memory Allocation

21
Paging – Basic Idea and its Need
▪ Physical memory (main memory) is divided into fixed-sized blocks called frames.
▪ The logical address space (in secondary memory) is divided into blocks of the same fixed size, called pages.
▪ Pages and frames are of the same size.
▪ Whenever a process needs to execute on the CPU, its pages are moved from the hard disk into available frames in main memory.

22
Paging

23
Advantages & Disadvantages of Paging

24
Example

25
Paging

Page number → Frame number

26
Paging

27
Translating Logical Address into Physical Address

28
Translating Logical Address into Physical Address

29
Page Table
Page Table: a page table is a data structure.
It maps the page number referenced by the CPU to the frame number where that page is stored.
Characteristics:
▪ The page table is stored in main memory.
▪ Number of entries in a page table = number of pages into which the process is divided.
▪ The Page Table Base Register (PTBR) contains the base address of the page table.
▪ Each process has its own independent page table.

30
Page Table
The Page Table Base Register (PTBR) provides the base address of the page table.
The base address of the page table is added to the page number referenced by the CPU.
This gives the page-table entry containing the frame number where the referenced page is stored.

31
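A minimal Python sketch of this translation, assuming a simple single-level page table stored as a list indexed by page number (the page size, table contents and names below are illustrative assumptions, not the slides' example):

PAGE_SIZE = 1024                         # assumed page/frame size in bytes

page_table = [5, 2, 7, 0]                # assumed mapping: page number -> frame number

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE    # high-order part of the address
    offset      = logical_address %  PAGE_SIZE    # low-order part, copied unchanged
    frame_number = page_table[page_number]        # page-table lookup (PTBR + page number)
    return frame_number * PAGE_SIZE + offset      # physical address

# Example: logical address 2060 -> page 2, offset 12 -> frame 7 -> 7*1024 + 12 = 7180
print(translate(2060))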
Page Table Entry
▪ A page table entry contains several pieces of information about the page.
▪ The information contained in a page table entry varies from operating system to operating system.
▪ The most important information in a page table entry is the frame number.
▪ In general, each entry of a page table contains the following information (a typical layout is sketched below):

32
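As a rough sketch of the fields a page-table entry commonly holds (the field names and defaults are assumptions based on typical textbook descriptions, not taken from the slides):

from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame_number: int             # frame where the page resides (the essential field)
    valid: bool = False           # valid/invalid (present) bit: is the page in main memory?
    protection: str = "rw"        # protection bits: read / write / execute permissions
    referenced: bool = False      # reference bit: set when the page is accessed
    dirty: bool = False           # dirty/modified bit: set when the page is written to
    cache_disabled: bool = False  # caching-disabled bit (useful for memory-mapped I/O)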
Paging: Important Formula

33
Paging: Important Formula

34
Problem based on Page table and Paging

35
Problem based on Page table and Paging

36
Problem based on Page table and Paging

37
Problem based on Page table and Paging

38
Overhead in Paging

39
Problem on Optimal Page Size

40
Problem on Optimal Page Size

41
Practice Problem on Paging

42
Practice Problem on Paging

43
Practice Problem on Paging

44
Translation Lookaside Buffer (TLB)
▪ Used to overcome the slow page-table access problem.
▪ The TLB is a page-table cache, implemented in fast associative memory.
▪ Its cost is high, so its capacity is limited; thus only a subset of the page table is kept in it.
▪ Each TLB entry contains a page number and the frame number where that page is stored in memory.

45
TLB - Working

▪ Whenever a logical address is generated, the page number of the logical address is searched in the TLB.
▪ If the page number is found, it is known as a TLB hit. In this case the corresponding frame number is fetched from the TLB entry and used to form the physical address. The whole task may take only a bit longer than an unmapped memory reference would.
▪ If a match is not found, it is termed a TLB miss. In this case a memory reference to the page table must be made to get the frame number, and this entry is then moved into the TLB.
▪ If the TLB is full when the entry is added, one of the existing entries in the TLB is removed (the replacement strategy can range from Least Recently Used (LRU) to random). A code sketch of this flow follows below.
46
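A small Python sketch of this hit/miss flow, using a dictionary as the page table and an OrderedDict as the TLB with LRU eviction (the capacities, mappings and names are assumptions for illustration):

from collections import OrderedDict

PAGE_SIZE = 1024
TLB_CAPACITY = 4                       # assumed; real TLBs hold tens to hundreds of entries

page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # assumed page -> frame mapping (in main memory)
tlb = OrderedDict()                    # page -> frame, ordered from least to most recently used

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page in tlb:                    # TLB hit: frame number comes straight from the TLB
        tlb.move_to_end(page)          # mark this entry as most recently used
        frame = tlb[page]
    else:                              # TLB miss: consult the page table in memory
        frame = page_table[page]
        if len(tlb) >= TLB_CAPACITY:   # TLB full: evict the least recently used entry
            tlb.popitem(last=False)
        tlb[page] = frame              # load the new translation into the TLB
    return frame * PAGE_SIZE + offset

print(translate(2060))                 # miss, then 7*1024 + 12 = 7180
print(translate(2100))                 # hit on page 2 -> 7*1024 + 52 = 7220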
TLB - Working

47
TLB

48
TLB
▪ Effective Access Time (EAT) calculation:
• Question: With an 80 percent hit ratio in the TLB, a TLB search time of 20 nanoseconds and a main-memory access time of 100 nanoseconds, what is the effective access time to find a page?
• Answer: The hit ratio is 80 percent and the miss ratio is 20 percent.

▪ Effective access time = H(TLB + MM) + M(TLB + PT + MM)
                        = H(TLB + MM) + M(TLB + 2MM)
                        = 0.8(20 + 100) + 0.2(20 + 100 + 100)
                        = 96 + 44 = 140 nanoseconds

49
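The same calculation as a small reusable Python function (the function and parameter names are illustrative):

def effective_access_time(hit_ratio, tlb_time, mem_time):
    # hit: TLB search + one memory access; miss: TLB search + page-table access + memory access
    hit_cost  = tlb_time + mem_time
    miss_cost = tlb_time + 2 * mem_time
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.8, 20, 100))   # 140.0 ns, matching the worked example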
Multilevel Paging

50
Disadvantages of Paging
1. Additional memory reference
▪ An extra memory access is required to read the page table.
▪ Every instruction requires two memory accesses: one for the page table, and one for the instruction or data.
2. Size of the page table
▪ Page tables can be too large to keep in main memory.
▪ A page table contains an entry for every page in the logical address space, so the larger the process, the larger its page table.
3. Internal fragmentation
▪ A process's size may not be an exact multiple of the page size.
▪ So some space remains unoccupied in the last page of a process. This results in internal fragmentation.
51
Segmentation

52
Segmentation
▪ Segmentation works from the user's point of view: the logical address space of a process is a collection of code, data and stack.
▪ Here the logical address space of a process is divided into blocks of varying size, called segments.
▪ Each segment contains a logical unit of the process.
▪ Whenever a process is to be executed, its segments are moved from secondary storage to main memory.
▪ Each segment is allocated a chunk of free memory equal in size to that segment.
▪ The OS maintains a segment table for each process. It includes the size of each segment and the location in memory where the segment has been loaded.
53
54
Segmentation
• The logical address is divided into two parts:
1. Segment number: identifies the segment.
2. Offset: the actual location within the segment.
• A short code sketch of the mapping follows below.

55
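A minimal Python sketch of segmentation address mapping, assuming a segment table of (base, limit) pairs (the table contents and names are illustrative assumptions, not the slides' example):

# assumed segment table: index = segment number, value = (base address, limit/size)
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset >= limit:                  # the offset must lie within the segment
        raise MemoryError("segmentation fault: offset outside segment limit")
    return base + offset                 # physical address = segment base + offset

print(translate(2, 53))     # 4300 + 53 = 4353
print(translate(0, 999))    # 1400 + 999 = 2399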
Address Mapping in Segmentation

56
Address Mapping in Segmentation

57
Segmentation

58
Problem based on Segmentation

59
Problem based on Segmentation

60
Problem based on Segmentation

61
Virtual Memory
▪ Virtual memory is a technique that allows a process to execute even though it is only partially loaded in main memory.
▪ The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory (main memory) available for it.
▪ The operating system keeps the parts of the program currently in use in main memory, and the rest on the disk.
▪ Program-generated addresses are called virtual addresses and form the virtual address space.
▪ The MMU (Memory Management Unit) maps virtual addresses onto physical memory addresses.
62
Advantages of Virtual Memory
▪ Fewer I/O operations are needed to load or swap each user program into memory.
▪ A program is no longer constrained by the amount of physical memory that is available; users can write programs for an extremely large virtual address space.
▪ Because each user program can take less physical memory, more programs can run at the same time, with a corresponding increase in CPU utilization and throughput.

63
Swapping
▪ Example: A system has a physical memory of size 32 MB. Now suppose there are 5 processes, each of size 8 MB, that all want to execute simultaneously. How is this possible?
▪ The solution is to use swapping. Swapping is a technique in which processes are moved between main memory and secondary memory (disk).
▪ Swapping uses some portion of secondary memory as a backing store, known as the swap area.
▪ The operation of moving a process from memory to the swap area is called "swap out", and moving it from the swap area back into memory is known as "swap in".
64
Swapping

65
What is Thrashing?
▪ Thrashing is a phenomenon that occurs in operating systems when the system spends an excessive amount of time swapping data between physical memory (RAM) and virtual memory (disk storage) due to high memory demand and low available resources.

▪ Thrashing can occur when there are too many processes running on a system and not enough physical memory to accommodate them all.
▪ As a result, the operating system must constantly swap pages of memory between physical memory and virtual memory. This can lead to a significant decrease in system performance, as the CPU spends more time swapping pages than actually executing code.
66
Thrashing

67
Thrashing

▪ Due to this, the performance of the system decreases.
▪ After a certain limit, thrashing occurs.
▪ To avoid this problem:
   i. Increase the main memory size.
   ii. Use the long-term scheduler efficiently.
(Illustration: P1 – page 1, P2 – page 1, P3 – page 1, P4 – page 1, …, P100 – page 1: many processes, each holding only a single page.)

68
Thrashing
▪ The main causes of thrashing in an operating system are:
▪ 1. High degree of multiprogramming: When too many processes are running on a system, the operating
system may not have enough physical memory to accommodate them all. This can lead to thrashing, as the
operating system is constantly swapping pages of memory between physical memory and disk.

▪ 2. Lack of frames: Frames are the units of memory that are used to store pages of memory. If there are not
enough frames available, the operating system will have to swap pages of memory to disk, which can lead to
thrashing.

▪ 3. Page replacement policy: The page replacement policy is the algorithm that the operating system uses to
decide which pages of memory to swap to disk. If the page replacement policy is not effective, it can lead to
thrashing.

▪ 4. Insufficient physical memory: If the system does not have enough physical memory, it will have to swap
pages of memory to disk more often, which can lead to thrashing.

▪ 5. Inefficient memory management: If the operating system is not managing memory efficiently, it can lead
to fragmentation of physical memory, which can also lead to thrashing.

▪ 6. Poorly designed applications: Applications that use excessive memory or that have poor memory
management practices can also contribute to thrashing.
69
Thrashing

The following are some of the symptoms of thrashing in an operating system:
1. High CPU utilization: When a system is thrashing, the CPU spends a lot of time swapping pages of memory between physical memory and disk. This can lead to high CPU utilization, even when the system is not performing any significant work.
2. Increased disk activity: When the system is thrashing, disk activity increases significantly as the system tries to swap data between physical memory and virtual memory.
3. High page fault rate: A page fault is an event that occurs when the CPU tries to access a page of memory that is not currently in physical memory. Thrashing causes a high page fault rate, as the operating system is constantly swapping pages of memory between physical memory and disk.
4. Slow response time: When the system is thrashing, its response time slows down significantly.
70
Preventing Thrashing
▪ Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can occur.
▪ Increase the amount of RAM: Since insufficient memory can cause disk thrashing, one solution is to add more RAM to the machine. With more memory, the computer can handle tasks more easily and does not have to work excessively. Generally, this is the best long-term solution.
▪ Decrease the number of applications running on the computer: If too many applications are running in the background, they consume a large share of system resources, and the little that remains can result in thrashing. Closing some applications releases resources and helps avoid thrashing to some extent.
▪ Replace programs: Replace programs that occupy a lot of memory with equivalents that use less memory.
71
Virtual Memory

Virtual memory can be implemented in the following three ways:
▪ Demand paging
▪ Demand segmentation
▪ Segmentation with paging
72
Demand Paging
▪ Demand paging is similar to a paging system with swapping, where processes reside in secondary memory. When we want to execute a process, we swap it into main memory.
▪ Rather than swapping the entire process into memory, we use a lazy swapper.
▪ A lazy swapper never swaps a page into memory unless that page will be needed.
▪ If a process needs to be swapped in, the pager brings in only those pages that will actually be used by the process.
▪ This avoids reading unused pages and decreases both the swap time and the amount of physical memory needed.
73
Demand Paging

74
Demand Paging – Page Fault
▪ The page table includes a valid-invalid bit for each page entry.
▪ If the bit is set to valid, the page is currently available in main memory.
▪ If it is set to invalid, the page is either invalid or not present in main memory.
▪ If a process tries to access a page that is not in main memory, it causes a page fault.
▪ A trap is generated to the OS, which then tries to swap the page in.

75
Demand Paging – Page Fault
The following steps are followed to handle a page fault:
1. Check the page table for the process to determine whether the reference is valid or invalid.
2. If the reference is invalid, terminate the process; if the page is valid but not currently in main memory, a trap instruction is generated.
3. The OS determines the location of that page in the swap area.
4. It then uses the free-frame list to find a free frame, and schedules a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the page table and set the bit to valid.
6. Restart the instruction. (A simplified sketch of these steps appears below.)

76
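A simplified Python simulation of these steps (the page table, free-frame list and names are assumptions made for illustration, not the slides' own example):

page_table = {0: {"frame": 3, "valid": True},
              1: {"frame": None, "valid": False}}   # page 1 currently resides on disk
free_frames = [5, 6]                                # assumed free-frame list

def access_page(page_number):
    entry = page_table.get(page_number)
    if entry is None:
        raise RuntimeError("invalid reference: terminate the process")   # step 2 (invalid)
    if not entry["valid"]:                  # page fault: trap to the OS
        frame = free_frames.pop(0)          # step 4: take a frame from the free-frame list
        # step 4 (continued): a disk read of the page into 'frame' would be scheduled here
        entry["frame"] = frame              # step 5: update the page table
        entry["valid"] = True               #         and mark the page as valid
        # step 6: the faulting instruction would now be restarted
    return entry["frame"]

print(access_page(1))    # page fault serviced -> frame 5
print(access_page(1))    # now an ordinary access -> frame 5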
Demand Paging – Page Fault

77
Page Replacement Policies - Need

78
Page Replacement Policies - Basics
1. Find the location of the desired page on disk.
2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement algorithm to select a victim frame.
   - Write the victim frame to disk if it is dirty.
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Continue the process by restarting the instruction that caused the trap.
79
Demand Paging in OS

80
Performance of Demand Paging in OS

81
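As a reminder of the standard relation used to evaluate demand-paging performance (the formula and the numbers below are common textbook values, assumed here rather than taken from the slides): effective access time = (1 − p) × memory access time + p × page-fault service time, where p is the page-fault rate.

def demand_paging_eat(p, mem_access_ns, fault_service_ns):
    # p = page-fault rate; a fault costs the full service time (dominated by disk I/O)
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Assumed example: 200 ns memory access, 8 ms (8,000,000 ns) page-fault service time
print(demand_paging_eat(1 / 1000, 200, 8_000_000))   # about 8199.8 ns, a large slowdown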
Working Process of Demand Paging

82
Advantages of Demand Paging in OS

83
Virtual Memory: Hardware and control structures

▪ Virtual memory involves the separation of the logical memory perceived by the user from physical memory.
▪ This separation allows an extremely large virtual memory to be provided to programmers even when only a smaller amount of physical memory is available.
▪ Thus the programmer need not worry about the amount of memory available.

84
First In First Out (FIFO)
FIFO

85
First In First Out (FIFO)
Advantages

• Very simple.
• Easy to implement.

Disadvantages

• FIFO implicitly assumes that a page fetched into memory a long time ago has by now fallen out of use.
• This reasoning will often be wrong, because there are often regions of program or data that are heavily used throughout the life of a program.
• Those pages will be repeatedly paged in and out by the FIFO algorithm.
86
First In First Out (FIFO)

87
First In First Out (FIFO)
▪ Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0
▪ 3 frames (3 pages can be in memory at a time)

Ref: 7  0  1  2  0  3  0  4  2  3  0  3  1  2  0
F1:  7  7  7  2  2  2  2  4  4  4  0  0  0  0  0
F2:  -  0  0  0  0  3  3  3  2  2  2  2  1  1  1
F3:  -  -  1  1  1  1  0  0  0  3  3  3  3  2  2
     F  F  F  F  H  F  F  F  F  F  F  H  F  F  H    (F = page fault, H = hit)

Page faults / misses = 12, Page hits = 3
▪ Miss ratio = number of misses / number of references = (12/15) × 100 = 80%
▪ Hit ratio = number of hits / number of references = (3/15) × 100 = 20%
88
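A small Python sketch of FIFO page replacement that reproduces the counts above (the simulation code is my own; only the reference string and frame count come from the example):

from collections import deque

def fifo_faults(reference_string, frame_count):
    frames = deque()                   # oldest page sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:         # page fault
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()       # evict the page that was loaded earliest
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0]
faults = fifo_faults(refs, 3)
print(faults, len(refs) - faults)      # 12 faults, 3 hits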
Optimal Page Replacement

▪ It is the best page replacement policy.
▪ The optimal policy selects for replacement the page that will not be used for the longest period of time.
▪ It is impossible to implement (we would need to know the future), but it serves as a standard against which to compare the other algorithms we shall study.
89
Optimal Page Replacement

▪ We need an approximation of how likely each frame is to be accessed in the future.
▪ If we base this on past behavior, we get a way to predict future behavior.
▪ Tracking memory accesses requires hardware support to be efficient.
90
Optimal Page Replacement
Advantages:

• Lowest number of page faults.
• Can improve system performance, since fewer page faults mean less swapping.

Disadvantages:

• Very difficult to implement.
91
Optimal Page Replacement Example 1

92
Optimal Page Replacement Example 2

93
Least Recently Used (LRU)
▪ It is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few.
▪ Conversely, pages that have not been used for ages will probably remain unused for a long time.
▪ This idea suggests a realizable algorithm: when a page fault occurs, throw out the page that has been unused for the longest time. A code sketch follows below.

94
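A compact Python sketch of LRU page replacement using an OrderedDict to track recency (a simulation for illustration, not the slides' own example):

from collections import OrderedDict

def lru_faults(reference_string, frame_count):
    frames = OrderedDict()             # keys = pages, ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)   # hit: mark the page as most recently used
        else:
            faults += 1                # page fault
            if len(frames) == frame_count:
                frames.popitem(last=False)   # evict the least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0]
print(lru_faults(refs, 3))             # 12 faults for this string with 3 frames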
Least Recently Used (LRU)

95
Least Recently Used (LRU)

96
97
Belady’s Anomaly

98
Belady’s Anomaly

99
Belady’s Anomaly

Pros and Cons of Cache Memory

112
Protection in Operating System
▪ The processes in an operating system must be protected from one another's activities.
▪ The OS uses various mechanisms to ensure that only processes that have gained proper authorization from the operating system can operate on the files, memory segments, CPU, and other resources of a system.
▪ Protection refers to mechanisms for controlling the access of programs, processes, or users to the resources defined by a computer system.

113
Need of Protection in OS
▪ The various needs for protection in an operating system are as follows:
▪ There may be security risks such as unauthorized reading, writing, or modification, or preventing the system from working effectively for authorized users.
▪ It helps to ensure data security, process security, and program security against unauthorized user or program access.
▪ It is important to ensure that there are no breaches of access rights, no viruses, and no unauthorized access to existing data.
▪ Its purpose is to ensure that programs, resources, and data are accessed only in accordance with the system's policies.

114
Goals of Protection
▪ The various goals of protection in the operating system are as follows:
▪ Policies define how processes access the computer system's resources, such as the CPU, memory, software, and even the operating system. They are the responsibility of both the operating system designer and the application programmer, and they can be modified at any time.
▪ Protection is a technique for guarding data and processes against harmful or intentional infiltration. It relies on protection policies either established by the system itself, set by management, or imposed individually by programmers to ensure that their programs are protected to the greatest extent possible.
▪ It also provides a multiprogramming OS with the security that its users expect when sharing common space such as files or directories.
115
Role of Protection in OS

116
Domains of Protection

117
Advantages

119
Disadvantages

120
