OS Unit-3

The document discusses memory management strategies in operating systems, focusing on swapping, contiguous memory allocation, fragmentation, and virtual memory management. It explains various memory allocation techniques, including fixed and variable-sized partitioning, as well as the concepts of internal and external fragmentation. Additionally, it covers demand paging, file concepts, and access methods for files, highlighting the importance of efficient memory usage and protection mechanisms.

MEMORY MANAGEMENT

Main Memory Management Strategies

Every program to be executed must be in memory: an instruction must be fetched from memory
before it can be executed.
In a multi-tasking OS, memory management is complex because, as processes are swapped in and out of
the CPU, their code and data must be swapped in and out of memory.

Swapping

• A process must be loaded into memory in order to execute.


• If there is not enough memory available to keep all running processes in memory at the same
time, then some processes that are not currently using the CPU may have their memory swapped
out to a fast local disk called the backing store.
• Swapping is the process of moving a process from memory to backing store and moving another
process from backing store to memory. Swapping is a very slow process compared to other
operations.
• A variant of swapping policy is used for priority-based scheduling algorithms. If a higher-
priority process arrives and wants service, the memory manager can swap out the lower-priority
process and then load and execute the higher-priority process. When the higher-priority process
finishes, the lower-priority process can be swapped back in and continued. This variant of
swapping is called roll out, roll in.

Figure: Swapping of two processes using a disk as a backing store

Contiguous Memory Allocation

• The main memory must accommodate both the operating system and the various user
processes. Therefore we need to allocate the parts of the main memory in the most efficient
way possible.
• Memory is usually divided into 2 partitions: One for the resident OS. One for the user
processes.
• Each process is contained in a single contiguous section of memory.
1. Memory Allocation

Two types of memory partitioning are:


1. Fixed-sized partitioning
2. Variable-sized partitioning

1. Fixed-sized Partitioning

• The memory is divided into fixed-sized partitions.


• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is selected from the input queue and loaded into the free
partition.
• When the process terminates, the partition becomes available for another process.

2. Variable-sized Partitioning

• The OS keeps a table indicating which parts of memory are available and which parts are
occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of
various sizes.
• Initially, all memory is available for user-processes and considered one large hole.
• When a process arrives, it is allocated memory from a hole large enough to hold it.
• If we find such a hole, we allocate only as much memory as is needed and keep the
remaining memory available to satisfy future requests.

Three strategies used to select a free hole from the set of available holes:

1. First Fit: Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search ended.

2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list,
unless the list is ordered by size. This strategy produces the smallest leftover hole.

3. Worst Fit: Allocate the largest hole. Again, we must search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole.

First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
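The three strategies can be illustrated with a minimal Python sketch; the `select_hole` helper and the hole sizes below are hypothetical, chosen only for illustration:

```python
def select_hole(holes, request, strategy):
    """Return the index of the chosen free hole, or None if no hole fits.

    holes    -- list of free-hole sizes (in the order they appear in memory)
    request  -- size of memory requested
    strategy -- 'first', 'best', or 'worst'
    """
    # keep only the holes that are big enough, preserving memory order
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == 'first':
        return candidates[0][1]    # first hole that is big enough
    if strategy == 'best':
        return min(candidates)[1]  # smallest hole that is big enough
    if strategy == 'worst':
        return max(candidates)[1]  # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
# for a request of 212: first fit -> index 1 (500),
# best fit -> index 3 (300), worst fit -> index 4 (600)
```

For a 212 KB request against holes of 100, 500, 200, 300, and 600 KB, first fit picks the 500 KB hole, best fit the 300 KB hole (smallest leftover), and worst fit the 600 KB hole (largest leftover).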

2. Fragmentation

Two types of memory fragmentation:


1. Internal fragmentation
2. External fragmentation

1. Internal Fragmentation
• The general approach is to break the physical-memory into fixed-sized blocks and
allocate memory in units based on block size.
• The allocated-memory to a process may be slightly larger than the requested-memory.
• The difference between requested-memory and allocated-memory is called internal
fragmentation i.e. Unused memory that is internal to a partition.

2. External Fragmentation
• External fragmentation occurs when there is enough total memory-space to satisfy a request
but the available-spaces are not contiguous. (i.e. storage is fragmented into a large number of
small holes).
• Both the first-fit and best-fit strategies for memory-allocation suffer from external
fragmentation.
• Statistical analysis of first-fit reveals that, given N allocated blocks, another 0.5 N blocks will
be lost to fragmentation; that is, one-third of memory may be unusable. This property is known as
the 50-percent rule.
3. Memory Mapping and Protection
• Memory-protection means protecting OS from user-process and protecting user-
processes from one another.
• Memory-protection is done using
1. Relocation-register: contains the value of the smallest physical-address.
2. Limit-register: contains the range of logical-addresses.
• Each logical-address must be less than the limit-register.
• The MMU maps the logical-address dynamically by adding the value in the relocation-
register. This mapped address is sent to memory.
• When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit-registers with the correct values.
• Because every address generated by the CPU is checked against these registers, we can
protect the OS from the running-process.
• The relocation-register scheme provides an effective way to allow the OS size to change
dynamically.
Transient OS code: Code that comes & goes as needed to save memory-space and overhead for
unnecessary swapping.
Figure: Hardware support for relocation and limit-registers
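The limit-register check and relocation described above can be sketched as follows; the `translate` helper and the register values are hypothetical, for illustration only:

```python
def translate(logical_addr, relocation, limit):
    """MMU sketch: check the logical address against the limit register,
    then add the relocation register to form the physical address."""
    if not (0 <= logical_addr < limit):
        # address beyond the limit register: trap to the OS
        raise MemoryError("trap to OS: addressing error")
    return logical_addr + relocation

# a process with relocation register 14000 and limit register 300:
# logical address 250 maps to physical address 14250;
# logical address 400 exceeds the limit and traps.
```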

Segmentation

Basic Method of Segmentation


• This is a memory-management scheme that supports user-view of memory (Figure 1).
• A logical-address space is a collection of segments.
• Each segment has a name and a length.
• The addresses specify both segment-name and offset within the segment.
• Normally, the user-program is compiled, and the compiler automatically constructs
segments reflecting the input program.
For example: the code, global variables, the heap (from which memory is allocated), the stacks
used by each thread, and the standard C library.

Figure: Programmer’s view of a Program

Hardware support for Segmentation


• Segment-table maps 2 dimensional user-defined addresses into one-dimensional physical
addresses.
• In the segment-table, each entry has following 2 fields:
1. Segment-base contains starting physical-address where the segment resides in
memory.
2. Segment-limit specifies the length of the segment (Figure 2).
• A logical-address consists of 2 parts:
1. Segment-number(s) is used as an index to the segment-table
2. Offset(d) must be between 0 and the segment-limit.
• If offset is not between 0 & segment-limit, then we trap to the OS(logical-addressing
attempt beyond end of segment).
• If offset is legal, then it is added to the segment-base to produce the physical-memory
address.

Figure: Segmentation hardware
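The segment-table lookup can be sketched in Python; the `seg_translate` helper and the example base/limit values are assumptions chosen for illustration:

```python
def seg_translate(segment_table, s, d):
    """Translate logical address (s, d) using a segment table.

    segment_table -- list of (base, limit) pairs, indexed by segment number s
    d             -- offset, which must satisfy 0 <= d < limit
    """
    base, limit = segment_table[s]
    if not (0 <= d < limit):
        # offset beyond the end of the segment: trap to the OS
        raise MemoryError("trap: logical-addressing attempt beyond end of segment")
    return base + d

# illustrative table: segment 2 starts at 4300 and is 400 bytes long, so
# (s=2, d=53) maps to physical address 4353.
table = [(1400, 1000), (6300, 400), (4300, 400), (3200, 1100), (4700, 1000)]
```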

Structure of the Page Table

The most common techniques for structuring the page table:


1. Hierarchical Paging
2. Hashed Page-tables
3. Inverted Page-tables

1. Hierarchical Paging
• Problem: Most computers support a large logical-address space (2^32 to 2^64). In these
systems, the page-table itself becomes excessively large.
• Solution: Divide the page-table into smaller pieces.

Two Level Paging Algorithm:


• The page-table itself is also paged.
• This is also known as a forward-mapped page-table because address translation works from
the outer page-table inwards.
Figure: A two-level page-table scheme
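The address split used by a two-level scheme can be sketched as follows, assuming a hypothetical 32-bit logical address with 4 KB pages, a 10-bit inner page-table index, and a 10-bit outer index:

```python
OFFSET_BITS = 12   # 4 KB pages -> 12-bit page offset
INNER_BITS = 10    # 10-bit index into the inner page-table

def split(vaddr):
    """Split a 32-bit virtual address into (p1, p2, offset):
    p1 indexes the outer page-table, p2 the inner one."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    vpn = vaddr >> OFFSET_BITS          # full virtual page number
    p2 = vpn & ((1 << INNER_BITS) - 1)  # low 10 bits: inner index
    p1 = vpn >> INNER_BITS              # high 10 bits: outer index
    return p1, p2, offset
```

Translation is "forward-mapped": `p1` selects an inner page-table, `p2` selects a frame within it, and `offset` is appended to the frame number.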
2. Hashed Page Tables
• This approach is used for handling address spaces larger than 32 bits.
• The hash-value is the virtual page-number.
• Each entry in the hash-table contains a linked-list of elements that hash to the same
location (to handle collisions).
• Each element consists of 3 fields:
1. Virtual page-number
2. Value of the mapped page-frame and
3. Pointer to the next element in the linked-list.

The algorithm works as follows:


• The virtual page-number is hashed into the hash-table.
• The virtual page-number is compared with the first element in the linked-list.
• If there is a match, the corresponding page-frame (field 2) is used to form the desired
physical-address.
• If there is no match, subsequent entries in the linked-list are searched for a matching
virtual page-number.

Figure: Hashed page-table
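The algorithm above can be sketched with a minimal chained hash table; the class and field names are hypothetical, and Python lists stand in for the linked lists of elements:

```python
class HashedPageTable:
    """Sketch of a hashed page-table: each bucket holds a chain of
    (virtual page-number, frame) pairs to handle collisions."""

    def __init__(self, nbuckets=16):
        self.buckets = [[] for _ in range(nbuckets)]

    def map(self, vpn, frame):
        # hash the virtual page-number into a bucket and append to its chain
        self.buckets[vpn % len(self.buckets)].append((vpn, frame))

    def lookup(self, vpn):
        # walk the chain, comparing virtual page-numbers
        for v, frame in self.buckets[vpn % len(self.buckets)]:
            if v == vpn:
                return frame    # match: frame forms the physical address
        return None             # no match anywhere in the chain
```

With 16 buckets, virtual pages 5 and 21 hash to the same bucket, so the chain walk distinguishes them.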


3. Inverted Page Tables
• Has one entry for each real page of memory.
• Each entry consists of virtual-address of the page stored in that real memory-location and
information about the process that owns the page.
• Each virtual-address consists of a triplet <process-id, page-number, offset>.
• Each inverted page-table entry is a pair <process-id, page-number>

Figure: Inverted page-table

The algorithm works as follows:


1. When a memory-reference occurs, part of the virtual-address, consisting of <process-id,
page-number>, is presented to the memory subsystem.
2. The inverted page-table is then searched for a match.
3. If a match is found at entry i, then the physical-address <i, offset> is generated.
4. If no match is found, then an illegal address access has been attempted.

Advantage:
1. Decreases the memory needed to store each page-table.

Disadvantages:
1. Increases the amount of time needed to search the table when a page reference occurs.
2. Difficulty implementing shared-memory.
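A linear-search sketch of the lookup, in Python; the `ipt_lookup` helper is hypothetical (real implementations search associatively or combine the inverted table with hashing to bound the search):

```python
def ipt_lookup(inverted_table, pid, page, offset):
    """inverted_table[i] holds the (process-id, page-number) pair for the
    process that owns physical frame i. Return <i, offset> on a match."""
    for i, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return (i, offset)            # physical address <i, offset>
    # no entry matches: the reference is illegal
    raise MemoryError("illegal address access")
```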

VIRTUAL MEMORY MANAGEMENT
• Virtual memory is a technique that allows the execution of partially loaded processes.
• Advantages:
▪ A program is not limited by the amount of physical memory available; the user
can write into a large virtual address space.
▪ Since each program takes less physical memory, more than one program can
run at the same time, which increases throughput and CPU utilization.
▪ Fewer I/O operations are needed to swap or load a user program into memory, so each
user program can run faster.
Fig: Virtual memory that is larger than physical memory.

• Virtual memory is the separation of the user's logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided even when there is less
physical memory.
• Separating logical memory from physical memory also allows files and memory to be
shared by several different processes through page sharing.

Fig: Shared Library using Virtual Memory

• Virtual memory is implemented using Demand Paging.


• Virtual address space: Every process has a virtual address space, the logical view of how
the process is stored in memory; holes in this space can be filled as the stack or heap grows in size.
Fig: Virtual address space

DEMAND PAGING
• Demand paging is similar to a paging system with swapping: when we want to execute a
process, we swap it into memory; otherwise it is not loaded into memory.
• A swapper manipulates entire processes, whereas a pager manipulates individual
pages of a process.
▪ Bring a page into memory only when it is needed
▪ Less I/O needed
▪ Less memory needed
▪ Faster response
▪ More users
▪ Page is needed ⇒ reference to it
▪ invalid reference ⇒ abort
▪ not-in-memory ⇒ bring to memory
▪ Lazy swapper: never swaps a page into memory unless the page will be needed
▪ A swapper that deals with pages is a pager.
Fig: Transfer of a paged memory into contiguous disk space

• Basic concept: Instead of swapping in the whole process, the pager swaps only the necessary pages
into memory. Thus it avoids reading unused pages and decreases both the swap time and the amount of
physical memory needed.
• The valid-invalid bit scheme can be used to distinguish between the pages that are on the disk
and those that are in memory.
▪ With each page-table entry a valid-invalid bit is associated
▪ (v ⇒ in-memory, i ⇒ not-in-memory)
▪ Initially the valid-invalid bit is set to i on all entries
▪ Example of a page-table snapshot:

• During address translation, if the valid-invalid bit in a page-table entry is i ⇒ page fault.
• If the bit is valid, then the page is both legal and in memory.
• If the bit is invalid, then either the page is not valid, or it is valid but currently on the disk.
Marking a page invalid has no effect if the process never accesses that page. If the process does
access a page marked invalid, a page-fault trap occurs, and the OS brings the desired page into
memory.

Fig: Page Table when some pages are not in main memory
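The valid-invalid bit check can be sketched with a toy `access` helper (a hypothetical name; a real pager would also find a free frame and schedule the disk read before restarting the instruction):

```python
def access(page_table, page):
    """page_table maps page number -> 'v' (in memory) or 'i' (not in memory)."""
    if page not in page_table:
        # reference outside the process's address space
        raise MemoryError("invalid reference: abort")
    if page_table[page] == 'i':
        # page-fault trap: the pager reads the page in from the backing store
        page_table[page] = 'v'
        return 'page fault serviced'
    return 'hit'
```

A reference to a valid in-memory page proceeds normally; the first reference to an `i` page faults, after which its bit is set to `v`.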
FILE CONCEPT
FILE:
• A file is a named collection of related information that is recorded on secondary storage.
• The information in a file is defined by its creator. Many different types of information
may be stored in a file: source programs, object programs, executable programs,
numeric data, text, payroll records, graphic images, sound recordings, and so on.

A file has a certain defined structure, which depends on its type.


• A text file is a sequence of characters organized into lines.
• A source file is a sequence of subroutines and functions, each of which is further
organized as declarations followed by executable statements.
• An object file is a sequence of bytes organized into blocks understandable by the
system's linker.
• An executable file is a series of code sections that the loader can bring into memory
and execute.
File Attributes
• A file is named, for the convenience of its human users, and is referred to by its
name. A name is usually a string of characters, such as example.c
• When a file is named, it becomes independent of the process, the user, and even the
system that created it.

A file's attributes vary from one operating system to another but typically consist of these:
• Name: The symbolic file name is the only information kept in human readable form.
• Identifier: This unique tag, usually a number, identifies the file within the file
system; it is the non-human-readable name for the file.
• Type: This information is needed for systems that support different types of files.
• Location: This information is a pointer to a device and to the location of the file on
that device.
• Size: The current size of the file (in bytes, words, or blocks) and possibly the maximum
allowed size are included in this attribute.
• Protection: Access-control information determines who can do reading, writing,
executing, and so on.
• Time, date, and user identification: This information may be kept for creation, last
modification, and last use. These data can be useful for protection, security, and usage
monitoring.

ACCESS METHODS
• Files store information. When it is used, this information must be accessed and read
into computer memory. The information in the file can be accessed in several ways.

• Some of the common methods are:

1. Sequential Access
• The simplest access method is sequential access. Information in the file is
processed in order, one record after the other.
• Reads and writes make up the bulk of the operations on a file.
• A read operation (read next) reads the next portion of the file and
automatically advances a file pointer, which tracks the I/O location.
• The write operation (write next) appends to the end of the file and advances to the
end of the newly written material.
• A file can be reset to the beginning, and on some systems a program may be able to
skip forward or backward n records for some integer n (perhaps only for n = 1).

Figure: Sequential-access file.
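Sequential access can be sketched with a toy in-memory file; the method names mirror the operations above, but the class itself is hypothetical:

```python
class SequentialFile:
    """Sketch of sequential access: a file pointer advances on each read,
    and writes append at the end of the file."""

    def __init__(self, records=None):
        self.records = list(records or [])
        self.pos = 0                      # the file pointer

    def read_next(self):
        rec = self.records[self.pos]
        self.pos += 1                     # advance past the record just read
        return rec

    def write_next(self, rec):
        self.records.append(rec)          # append at end of file
        self.pos = len(self.records)      # advance to end of new material

    def reset(self):
        self.pos = 0                      # back to the beginning of the file
```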

2. Direct Access
• A file is made up of fixed length logical records that allow programs to read and write
records rapidly in no particular order.
• The direct-access method is based on a disk model of a file, since disks allow random
access to any file block. For direct access, the file is viewed as a numbered sequence
of blocks or records.
• Example: we may read block 14, then read block 53, and then write block 7. There
are no restrictions on the order of reading or writing for a direct-access file.
• Direct-access files are of great use for immediate access to large amounts of
information such as Databases, where searching becomes easy and fast.
• For the direct-access method, the file operations must be modified to include the block
number as a parameter. Thus, we have read n, where n is the block number, rather than
read next, and write n rather than write next.
• An alternative approach is to retain read next and write next, as with sequential access,
and to add an operation position file to n, where n is the block number. Then, to effect
a read n, we would position to n and then read next.
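Direct access can be sketched as a numbered array of blocks, including the position-then-read-next alternative; the class and method names are hypothetical:

```python
class DirectFile:
    """Sketch of direct access: blocks are numbered, and read n / write n
    may occur in any order."""

    def __init__(self, nblocks):
        self.blocks = [None] * nblocks
        self.pos = 0                      # used by the read-next alternative

    def write(self, n, data):             # write n
        self.blocks[n] = data

    def read(self, n):                    # read n
        return self.blocks[n]

    def position(self, n):                # position file to n ...
        self.pos = n

    def read_next(self):                  # ... then read next == read n
        data = self.blocks[self.pos]
        self.pos += 1
        return data
```

Writing blocks 14, 53, and 7 in that order, then reading them back in any order, shows that no sequential constraint applies.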

Other Access Methods:


• Other access methods can be built on top of a direct-access method. These methods
generally involve the construction of an index for the file.
• The index, like an index in the back of a book, contains pointers to the various
blocks. To find a record in the file, we first search the index and then use the pointer
to access the file directly and to find the desired record.

Figure: Example of index and relative files

DIRECTORY AND DISK STRUCTURE


• Files are stored on random-access storage devices, including hard disks, optical disks,
and solid state (memory-based) disks.
• A storage device can be used in its entirety for a file system, or it can be subdivided
for finer-grained control.
• A disk can be subdivided into partitions, and each disk or partition can be RAID-protected
against failure.
• Partitions are also known as minidisks or slices. The entity containing a file system is known as a
volume. Each volume that contains a file system must also contain information about
the files in the system. This information is kept in entries in a device directory or
volume table of contents.

Figure: A Typical File-system Organization

1. Single-level Directory
• The simplest directory structure is the single-level directory. All files are contained in
the same directory, which is easy to support and understand

• A single-level directory has significant limitations, when the number of files increases or
when the system has more than one user.

• As the directory structure is single, uniqueness of file names has to be maintained, which is
difficult when there are multiple users.
• Even a single user on a single-level directory may find it difficult to remember the names
of all the files as the number of files increases.
• It is not uncommon for a user to have hundreds of files on one computer system and an
equal number of additional files on another system. Keeping track of so many files is a
daunting task.

2. Two-Level Directory
• In the two-level directory structure, each user has its own user file directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user.
• When a user refers to a particular file, only his own UFD is searched. Different users may
have files with the same name, as long as all the file names within each UFD are unique.
• To create a file for a user, the operating system searches only that user's UFD to ascertain
whether another file of that name exists. To delete a file, the operating system confines
its search to the local UFD; thus, it cannot accidentally delete another user's file that has
the same name.
• When a user job starts or a user logs in, the system's Master file directory (MFD) is
searched. The MFD is indexed by user name or account number, and each entry points
to the UFD for that user.

• Advantage:
▪ No file name-collision among different users.
▪ Efficient searching.
• Disadvantage
▪ Users are isolated from one another and can’t cooperate on the same task.
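The MFD/UFD search rule can be sketched with nested dictionaries; the names `mfd`, `create`, and `delete` are hypothetical:

```python
mfd = {}                                  # master file directory: user name -> UFD

def create(user, fname):
    """Create fname for user: only that user's own UFD is searched
    for a name collision."""
    ufd = mfd.setdefault(user, {})        # look up (or create) the user's UFD
    if fname in ufd:
        raise FileExistsError(fname)
    ufd[fname] = {}                       # empty attribute record

def delete(user, fname):
    """Deletion is confined to the local UFD, so another user's
    file of the same name is never touched."""
    del mfd[user][fname]

create('alice', 'test.c')
create('bob', 'test.c')                   # same name under a different user: allowed
```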

3. Tree Structured Directories


• A tree is the most common directory structure.
• The tree has a root directory, and every file in the system has a unique path name.
• A directory contains a set of files or subdirectories. A directory is simply another file, but
it is treated in a special way. All directories have the same internal format. One bit in each directory
entry defines the entry as a file (0) or as a subdirectory (1). Special system calls are used to create
and delete directories.
• Two types of path-names:
1. Absolute path-name: begins at the root.
2. Relative path-name: defines a path from the current directory.
4. Acyclic Graph Directories
• The common subdirectory should be shared. A shared directory or file will exist in the
file system in two or more places at once. A tree structure prohibits the sharing of files
or directories.
• An acyclic graph is a graph with no cycles. It allows directories to share subdirectories
and files.

• The same file or subdirectory may be in two different directories. The acyclic graph
is a natural generalization of the tree-structured directory scheme.

Two methods to implement shared-files (or subdirectories):


1. Create a new directory-entry called a link. A link is a pointer to another file (or
subdirectory).
2. Duplicate all information about shared-files in both sharing directories.

Two problems:
1. A file may have multiple absolute path-names.
2. Deletion may leave dangling-pointers to the non-existent file.

Solution to deletion problem:


1. Use back-pointers: Preserve the file until all references to it are deleted.
2. With symbolic links, remove only the link, not the file. If the file itself is
deleted, the link can be removed.

PROTECTION
• When information is stored in a computer system, we want to keep it safe from physical
damage (reliability) and improper access (protection).
• Reliability is generally provided by duplicate copies of files.
• For a small single-user system, we might provide protection by physically removing the
floppy disks and locking them in a desk drawer.
• File owner/creator should be able to control what can be done and by whom.

Types of Access
• Systems that do not permit access to the files of other users do not need protection. This is
too extreme, so controlled-access is needed.
• Following operations may be controlled:
1. Read: Read from the file.
2. Write: Write or rewrite the file.
3. Execute: Load the file into memory and execute it.
4. Append: Write new information at the end of the file.
5. Delete: Delete the file and free its space for possible reuse.
6. List: List the name and attributes of the file.

Access Control
• Common approach to protection problem is to make access dependent on identity of user.
• Files can be associated with an ACL (access-control list) which specifies username and
types of access for each user.

Problems:

1. Constructing a list can be tedious.


2. Directory-entry now needs to be of variable-size, resulting in more complicated space
management.
Solution:
• These problems can be resolved by combining ACLs with an ‘owner, group, universe’
access control scheme
• To reduce the length of the ACL, many systems recognize 3 classifications of users:
1. Owner: The user who created the file is the owner.
2. Group: A set of users who are sharing the file and need similar access is a
group.
3. Universe: All other users in the system constitute the universe.
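The owner/group/universe scheme is commonly encoded as nine permission bits, three per class (as in UNIX modes such as 0o754); a minimal sketch of the check:

```python
def check_access(mode_bits, who, op):
    """mode_bits: 9-bit rwxrwxrwx value, e.g. 0o754 = rwx r-x r--.
    who is 'owner', 'group', or 'universe'; op is 'read', 'write', or 'execute'."""
    shift = {'owner': 6, 'group': 3, 'universe': 0}[who]   # which 3-bit field
    bit = {'read': 4, 'write': 2, 'execute': 1}[op]        # r=4, w=2, x=1
    return bool((mode_bits >> shift) & bit)
```

With mode 0o754 the owner may read, write, and execute; the group may read and execute but not write; the universe may only read.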

Other Protection Approaches


• A password can be associated with each file.
• Disadvantages:
1. The number of passwords to remember may become large.
2. If only one password is used for all files, then all files become accessible if it is
discovered.
3. Commonly, only one password is associated with all of a user's files, so
protection is all-or-nothing.

• In a multilevel directory-structure, we need to provide a mechanism for directory


protection.
• The directory operations that must be protected are different from the File-operations:
1. Control creation & deletion of files in a directory.
2. Control whether a user can determine the existence of a file in a directory.

FILE SYSTEM STRUCTURE


• Disks provide the bulk of secondary-storage on which a file-system is maintained.
The disk is a suitable medium for storing multiple files.
• This is because of two characteristics
1. A disk can be rewritten in place; it is possible to read a block from the disk, modify
the block, and write it back into the same place.
2. A disk can access directly any block of information it contains. Thus, it is simple to
access any file either sequentially or randomly, and switching from one file to another
requires only moving the read-write heads and waiting for the disk to rotate.
• To improve I/O efficiency, I/O transfers between memory and disk are performed in units
of blocks. Each block has one or more sectors. Depending on the disk drive, sector-size
varies from 32 bytes to 4096 bytes. The usual size is 512 bytes.
• File-systems provide efficient and convenient access to the disk by allowing data to be
stored, located, and retrieved easily
• Design problems of file-systems:
1. Defining how the file-system should look to the user.
2. Creating algorithms & data-structures to map the logical file-system onto the
physical secondary-storage devices.

Layered File Systems:


• The file system provides efficient and convenient access to the disk by allowing data to be stored,
located, and retrieved easily. The file system itself is generally composed of many different levels;
the structure shown in the figure is an example of a layered design. Each level in the design uses
the features of lower levels to create new features for use by higher levels.

• The lowest level, the I/O control, consists of device drivers and interrupt handlers that transfer
information between main memory and the disk system.
