OS Unit-3
Every program to be executed must be in memory. The instruction must be
fetched from memory before it is executed.
In a multi-tasking OS, memory management is complex because, as processes are swapped in and
out of the CPU, their code and data must be swapped in and out of memory.
Contiguous Memory Allocation
• The main memory must accommodate both the operating system and the various user
processes. Therefore we need to allocate the parts of the main memory in the most efficient
way possible.
• Memory is usually divided into 2 partitions: One for the resident OS. One for the user
processes.
• Each process is contained in a single contiguous section of memory.
1. Memory Allocation
1. Fixed-sized Partitioning
2. Variable-sized Partitioning
• The OS keeps a table indicating which parts of memory are available and which parts are
occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of
various sizes.
• Initially, all memory is available for user-processes and considered one large hole.
• When a process arrives, the process is allocated memory from a hole large enough for it.
• If the hole is larger than needed, we allocate only as much memory as is required and keep
the remaining memory available to satisfy future requests.
Three strategies used to select a free hole from the set of available holes:
1. First Fit: Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search ended.
2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list,
unless the list is ordered by size. This strategy produces the smallest leftover hole.
3. Worst Fit: Allocate the largest hole. Again, we must search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole.
First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
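The three placement strategies above can be sketched as follows; a minimal sketch, assuming free memory is tracked as a list of (start, size) holes (the function name and hole values are illustrative):

```python
# Minimal sketch of first fit, best fit, and worst fit hole selection.
# Free memory is a list of (start, size) holes; names are illustrative.
def allocate(holes, request, strategy):
    """Return the index of the chosen hole for `request` bytes, or None."""
    candidates = [(i, size) for i, (_, size) in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                        # first big-enough hole
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest big-enough hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest hole overall
    raise ValueError(strategy)

holes = [(0, 100), (150, 30), (200, 500), (800, 60)]
print(allocate(holes, 50, "first"))  # 0: hole of size 100 found first
print(allocate(holes, 50, "best"))   # 3: size 60 is the smallest that fits
print(allocate(holes, 50, "worst"))  # 2: size 500 is the largest hole
```

Note that best fit and worst fit scan every candidate, matching the text's point that the whole list must be searched unless it is kept sorted by size.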
2. Fragmentation
1. Internal Fragmentation
• The general approach is to break the physical-memory into fixed-sized blocks and
allocate memory in units based on block size.
• The allocated-memory to a process may be slightly larger than the requested-memory.
• The difference between requested-memory and allocated-memory is called internal
fragmentation i.e. Unused memory that is internal to a partition.
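The unused tail inside the last allocated block can be computed directly; a minimal sketch, assuming a fixed 4 KB block size (the value is illustrative):

```python
# Sketch of internal fragmentation with fixed-size blocks: a request is
# rounded up to whole blocks, and the leftover in the last block is wasted.
BLOCK = 4096  # assumed block size in bytes

def internal_fragmentation(request):
    blocks = -(-request // BLOCK)       # ceiling division
    allocated = blocks * BLOCK          # memory actually handed out
    return allocated - request          # unused memory internal to the partition

print(internal_fragmentation(10000))  # 3 blocks = 12288 allocated -> 2288 wasted
print(internal_fragmentation(4096))   # exact fit -> 0 wasted
```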
2. External Fragmentation
• External fragmentation occurs when there is enough total memory-space to satisfy a request
but the available-spaces are not contiguous. (i.e. storage is fragmented into a large number of
small holes).
• Both the first-fit and best-fit strategies for memory-allocation suffer from external
fragmentation.
• Statistical analysis of first-fit reveals that given N allocated blocks, another 0.5 N blocks will
be lost to fragmentation. This property is known as the 50-percent rule.
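The defining condition of external fragmentation, enough free memory in total but no single contiguous hole, can be checked directly; a small sketch with illustrative hole sizes:

```python
# Sketch of external fragmentation: the total free space satisfies the
# request, but no single contiguous hole is large enough.
holes = [30, 25, 40, 10]       # sizes of free holes (illustrative)
request = 80

print(sum(holes) >= request)   # True: 105 bytes free in total
print(max(holes) >= request)   # False: largest contiguous hole is only 40
```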
3. Memory Mapping and Protection
• Memory-protection means protecting the OS from user processes and protecting user
processes from one another.
• Memory-protection is done using
1. Relocation-register: contains the value of the smallest physical-address.
2. Limit-register: contains the range of logical-addresses.
• Each logical-address must be less than the limit-register.
• The MMU maps the logical-address dynamically by adding the value in the relocation-
register. This mapped-address is sent to memory
• When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit-registers with the correct values.
• Because every address generated by the CPU is checked against these registers, we can
protect the OS from the running-process.
• The relocation-register scheme provides an effective way to allow the OS size to change
dynamically.
Transient OS code: Code that comes & goes as needed to save memory-space and overhead for
unnecessary swapping.
Figure: Hardware support for relocation and limit-registers
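The relocation/limit check described above can be sketched as follows; a minimal sketch, with illustrative register values (a real MMU does this in hardware on every address):

```python
# Sketch of the MMU check: every logical address is first compared against
# the limit register, then relocated by adding the relocation register.
RELOCATION = 14000   # smallest physical address of the process (illustrative)
LIMIT = 3000         # range of valid logical addresses (illustrative)

def translate(logical):
    if logical >= LIMIT:            # out-of-range access: trap to the OS
        raise MemoryError("trap: addressing error")
    return logical + RELOCATION     # mapped physical address sent to memory

print(translate(100))   # 14100
print(translate(0))     # 14000
```

The dispatcher would reload RELOCATION and LIMIT with the selected process's values on every context switch, which is why each process sees only its own range.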
Structure of the Page Table
1. Hierarchical Paging
• Problem: Most computers support a large logical-address space (232 to 264). In these
systems, the page-table itself becomes excessively large.
• Solution: Divide the page-table into smaller pieces.
Advantage:
1. Decreases the memory needed to store each page-table.
Disadvantages:
1. Increases the amount of time needed to search the table when a page reference occurs.
2. Difficulty implementing shared memory.
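Splitting a logical address for a two-level (hierarchical) page table can be sketched as follows; a minimal sketch, assuming a 32-bit address, 4 KB pages, and 10-bit outer and inner indices (a common but not universal layout):

```python
# Sketch of a two-level page-table address split: the 32-bit address is
# divided into an outer index p1, an inner index p2, and a page offset.
def split(addr):
    offset = addr & 0xFFF           # low 12 bits: offset within the 4 KB page
    p2 = (addr >> 12) & 0x3FF       # next 10 bits: index into the inner table
    p1 = (addr >> 22) & 0x3FF       # top 10 bits: index into the outer table
    return p1, p2, offset

print(split(0x00403ABC))  # (1, 3, 2748) i.e. p1=1, p2=3, offset=0xABC
```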
Figure: Inverted page-table
Advantage:
1. Decreases the memory needed to store each page-table.
Disadvantages:
1. Increases the amount of time needed to search the table when a page reference occurs.
2. Difficulty implementing shared memory.
VIRTUAL MEMORY MANAGEMENT
• Virtual memory is a technique that allows for the execution of partially loaded processes.
• Advantages:
▪ A program will not be limited by the amount of physical memory that is available;
the user is able to write into a large virtual space.
▪ Since each program takes less physical memory, more than one program
can be run at the same time, which increases throughput and CPU
utilization.
▪ Fewer I/O operations are needed to swap or load a user program into memory,
so each user program can run faster.
Fig: Virtual memory that is larger than physical memory.
• Virtual memory is the separation of the user's logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided even when there is less
physical memory.
• Separating logical memory from physical memory also allows files and memory to be
shared by several different processes through page sharing.
DEMAND PAGING
• Demand paging is similar to a paging system with swapping: when we want to execute a
process, we swap the needed pages into memory; pages that are not needed are not loaded
into memory.
• A swapper manipulates entire processes, whereas a pager manipulates the individual
pages of a process.
▪ Bring a page into memory only when it is needed
▪ Less I/O needed
▪ Less memory needed
▪ Faster response
▪ More users
▪ Page is needed ⇒ reference to it
▪ invalid reference ⇒ abort
▪ not-in-memory ⇒ bring to memory
▪ Lazy swapper: never swaps a page into memory unless the page will be needed
▪ A swapper that deals with pages is a pager.
Fig: Transfer of a paged memory to contiguous disk space
• Basic concept: Instead of swapping the whole process the pager swaps only the necessary pages
in to memory. Thus it avoids reading unused pages and decreases the swap time and amount of
physical memory needed.
• The valid-invalid bit scheme can be used to distinguish between the pages that are on the disk
and that are in memory.
▪ With each page table entry a valid–invalid bit is associated
▪ (v ⇒ in-memory, i⇒not-in-memory)
▪ Initially, the valid–invalid bit is set to i on all entries
▪ Example of a page table snapshot:
• During address translation, if the valid–invalid bit in a page-table entry is i ⇒ page fault.
• If the bit is valid then the page is both legal and is in memory.
If the bit is invalid, then either the page is not valid or it is valid but currently on the disk.
Marking a page invalid has no effect if the process never accesses that page. If the process
accesses a page marked invalid, a page-fault trap occurs. This trap is the result of the OS's
failure to bring the desired page into memory.
Fig: Page Table when some pages are not in main memory
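The valid–invalid bit check and the resulting page fault can be sketched as follows; a minimal sketch, with illustrative page and frame numbers (a real pager would pick a free frame and read the page from disk):

```python
# Sketch of demand paging: each page-table entry holds a valid-invalid bit
# and a frame number; referencing an invalid entry triggers a page fault.
page_table = {0: ("v", 4), 1: ("i", None), 2: ("v", 6)}  # page -> (bit, frame)
faults = 0

def access(page):
    global faults
    bit, frame = page_table.get(page, ("i", None))
    if bit == "i":                        # page-fault trap
        faults += 1
        frame = 9                         # pager brings the page into a free frame
        page_table[page] = ("v", frame)   # mark the entry valid
    return frame

access(0); access(1); access(1)
print(faults)  # 1: only the first reference to page 1 faults
```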
FILE CONCEPT
FILE:
• A file is a named collection of related information that is recorded on secondary storage.
• The information in a file is defined by its creator. Many different types of information
may be stored in a file: source programs, object programs, executable programs,
numeric data, text, payroll records, graphic images, sound recordings, and so on.
A file's attributes vary from one operating system to another but typically consist of these:
• Name: The symbolic file name is the only information kept in human readable form.
• Identifier: This unique tag, usually a number, identifies the file within the file
system; it is the non-human-readable name for the file.
• Type: This information is needed for systems that support different types of files.
• Location: This information is a pointer to a device and to the location of the file on
that device.
• Size: The current size of the file (in bytes, words, or blocks) and possibly the maximum
allowed size are included in this attribute.
• Protection: Access-control information determines who can do reading, writing,
executing, and so on.
• Time, date, and user identification: This information may be kept for creation, last
modification, and last use. These data can be useful for protection, security, and usage
monitoring.
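The attributes above can be grouped into a single record, much like the file control block an OS keeps per file; a minimal sketch (field names and values are illustrative):

```python
# Sketch of the file attributes listed above as one record, similar in
# spirit to a file control block; field names are illustrative.
from dataclasses import dataclass

@dataclass
class FileAttributes:
    name: str          # human-readable symbolic name
    identifier: int    # unique, non-human-readable tag within the file system
    ftype: str         # e.g. "text", "executable"
    location: str      # pointer to the device and position on it
    size: int          # current size in bytes
    protection: str    # access-control information
    timestamps: dict   # creation / last-modification / last-use

f = FileAttributes("notes.txt", 42, "text", "disk0:block 120", 1024,
                   "rw-r--r--", {"created": "2024-01-01"})
print(f.name, f.size)  # notes.txt 1024
```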
ACCESS METHODS
• Files store information. When it is used, this information must be accessed and read
into computer memory. The information in the file can be accessed in several ways.
1. Sequential Access
• The simplest access method is sequential access. Information in the file is
processed in order, one record after the other.
• Reads and writes make up the bulk of the operations on a file.
• A read operation (read next) reads the next portion of the file and
automatically advances a file pointer, which tracks the I/O location.
• The write operation (write next) appends to the end of the file and advances to the
end of the newly written material.
• A file can be reset to the beginning and on some systems, a program may be able to
skip forward or backward n records for some integer n-perhaps only for n =1.
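The read next / write next / reset behavior described above can be sketched as follows; a minimal sketch, with a list standing in for the file's records (class and method names are illustrative):

```python
# Sketch of sequential access: a file pointer advances on each read next /
# write next, and reset rewinds it to the beginning of the file.
class SequentialFile:
    def __init__(self, records):
        self.records = list(records)
        self.pos = 0                    # the file pointer
    def read_next(self):
        rec = self.records[self.pos]
        self.pos += 1                   # pointer advances automatically
        return rec
    def write_next(self, rec):
        self.records.append(rec)        # appends at the end of the file
        self.pos = len(self.records)    # pointer moves past the new material
    def reset(self):
        self.pos = 0                    # rewind to the beginning

f = SequentialFile(["r0", "r1"])
print(f.read_next())  # r0
print(f.read_next())  # r1
f.reset()
print(f.read_next())  # r0
```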
2. Direct Access
• A file is made up of fixed length logical records that allow programs to read and write
records rapidly in no particular order.
• The direct-access method is based on a disk model of a file, since disks allow random
access to any file block. For direct access, the file is viewed as a numbered sequence
of blocks or records.
• Example: we may read block 14, then read block 53, and then write block 7. There
are no restrictions on the order of reading or writing for a direct-access file.
• Direct-access files are of great use for immediate access to large amounts of
information such as Databases, where searching becomes easy and fast.
• For the direct-access method, the file operations must be modified to include the block
number as a parameter. Thus, we have read n, where n is the block number, rather than
read next, and write n rather than write next.
• An alternative approach is to retain read next and write next, as with sequential access,
and to add an operation position file to n, where n is the block number. Then, to affect
a read n, we would position to n and then read next.
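The read n / write n operations can be sketched as follows; a minimal sketch, viewing the file as a numbered sequence of blocks (class name and block contents are illustrative):

```python
# Sketch of direct access: read n / write n take the block number directly,
# so blocks can be touched in any order.
class DirectFile:
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks  # file as a numbered sequence of blocks
    def read(self, n):                  # "read n"
        return self.blocks[n]
    def write(self, n, data):           # "write n"
        self.blocks[n] = data

f = DirectFile(64)
f.write(7, "payroll")
print(f.read(14), f.read(7))  # None payroll -- no ordering restrictions
```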
DIRECTORY STRUCTURE
1. Single-level Directory
• The simplest directory structure is the single-level directory. All files are contained in
the same directory, which is easy to support and understand
• A single-level directory has significant limitations, when the number of files increases or
when the system has more than one user.
2. Two-Level Directory
• In the two-level directory structure, each user has its own user file directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user.
• When a user refers to a particular file, only his own UFD is searched. Different users may
have files with the same name, as long as all the file names within each UFD are unique.
• To create a file for a user, the operating system searches only that user's UFD to ascertain
whether another file of that name exists. To delete a file, the operating system confines
its search to the local UFD; thus, it cannot accidentally delete another user's file that has
the same name.
• When a user job starts or a user logs in, the system's Master file directory (MFD) is
searched. The MFD is indexed by user name or account number, and each entry points
to the UFD for that user.
• Advantage:
▪ No file name-collision among different users.
▪ Efficient searching.
• Disadvantage
▪ Users are isolated from one another and can’t cooperate on the same task.
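The MFD/UFD lookup described above can be sketched as follows; a minimal sketch, with dictionaries standing in for the directories (user and file names are illustrative):

```python
# Sketch of a two-level directory: the MFD maps each user to that user's
# UFD, and a lookup searches only the owner's UFD.
mfd = {
    "alice": {"test": "block 10", "data": "block 12"},  # alice's UFD
    "bob":   {"test": "block 30"},                      # bob's UFD
}

def lookup(user, filename):
    ufd = mfd[user]              # MFD is indexed by user name
    return ufd.get(filename)     # search confined to this user's UFD

print(lookup("alice", "test"))  # block 10
print(lookup("bob", "test"))    # block 30 -- same name, no collision
```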
3. Acyclic-Graph Directory
• The same file or subdirectory may be in two different directories. The acyclic graph
is a natural generalization of the tree-structured directory scheme.
Two problems:
1. A file may have multiple absolute path-names.
2. Deletion may leave dangling-pointers to the non-existent file.
PROTECTION
• When information is stored in a computer system, we want to keep it safe from physical
damage (reliability) and improper access (protection).
• Reliability is generally provided by duplicate copies of files.
• For a small single-user system, we might provide protection by physically removing the
floppy disks and locking them in a desk drawer.
• File owner/creator should be able to control what can be done and by whom.
Types of Access
• Systems that do not permit access to the files of other users do not need protection. This is
too extreme, so controlled-access is needed.
• Following operations may be controlled:
1. Read: Read from the file.
2. Write: Write or rewrite the file.
3. Execute: Load the file into memory and execute it.
4. Append: Write new information at the end of the file.
5. Delete: Delete the file and free its space for possible reuse.
6. List: List the name and attributes of the file.
Access Control
• Common approach to protection problem is to make access dependent on identity of user.
• Files can be associated with an ACL (access-control list) which specifies username and
types of access for each user.
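An ACL check can be sketched as follows; a minimal sketch, with a dictionary mapping each user name to the set of operations that user may perform (users and rights are illustrative):

```python
# Sketch of an access-control list: each entry names a user and the set of
# operations that user may perform on the file.
acl = {
    "alice": {"read", "write", "execute"},
    "bob":   {"read"},
}

def allowed(user, operation):
    return operation in acl.get(user, set())  # unlisted users get no access

print(allowed("bob", "read"))   # True
print(allowed("bob", "write"))  # False: not in bob's entry
```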
Problems:
• Constructing the list can be tedious if we do not know the list of users in advance.
• The directory entry must be of variable size, resulting in more complicated space management.
• The lowest level, the I/O control, consists of device drivers and interrupt handlers that
transfer information between main memory and the disk system.