Operating System Unit-1
B.tech (CSE)
Notes
Dr. Aarti
Assistant Professor (CSE)
BVCOE, New Delhi
UNIT-I
Introduction: Introduction to OS. Operating system functions, different types of O.S.: batch
processing, multi-programmed, time-sharing, real-time, distributed, parallel.
System Structure: Computer system operation, I/O structure, storage structure, storage
hierarchy, different types of protections, operating system structure (simple, layered, virtual
machine), O/S services, system calls.
Operating System
A program that acts as an intermediary between a user of a computer and the computer hardware.
An operating System is a collection of system programs that together control the operations of a computer
system.
Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2,
MacOS, VMS, MVS, and VM.
Operating system goals:
• Execute user programs and make solving user problems easier.
• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.
Computer System Components
1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various application programs
for the various users.
3. Applications programs – Define the ways in which the system resources are used to solve the computing
problems of the users (compilers, database systems, video games, business programs).
4. Users (people, machines, other computers).
1. Mainframe Systems
Mainframe batch systems reduce setup time by batching similar jobs, with automatic job sequencing that automatically transfers control from one job to another. This was the first rudimentary operating system. A resident monitor controls execution:
• initial control is in the monitor
• control transfers to the job
• when the job completes, control transfers back to the monitor
2. Batch Processing Operating System:
This type of OS accepts more than one job, and these jobs are batched/grouped together according to their similar requirements by the computer operator. Whenever the computer becomes available, the batched jobs are sent for execution, and the output is gradually returned to the user. Only one program runs at a time. This OS is responsible for scheduling the jobs according to priority and the resources required.
3. Multiprogramming Operating System:
This type of OS executes more than one job concurrently on a single processor. It increases CPU utilization by organizing jobs so that the CPU always has one job to execute.
The concept of multiprogramming is described as follows:
➢ All the jobs that enter the system are stored in the job pool (on disk). The operating system loads a set of jobs from the job pool into main memory and begins to execute them.
The ability to continue providing service proportional to the level of surviving hardware is called graceful
degradation. Systems designed for graceful degradation are called fault tolerant.
3. Protection
When several disjoint processes execute concurrently, it should not be possible for one process to
interfere with the others, or with the operating system itself. Protection involves ensuring that all access to system
resources is controlled. Security of the system from outsiders is also important. Such security starts with each user
having to authenticate himself or herself to the system, usually by means of a password, to be allowed access to the resources.
System Call:
➢ System calls provide an interface between the process and the operating system.
➢ System calls allow user-level processes to request services from the operating system which the process itself is not allowed to perform.
➢ For example, for I/O a process invokes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
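As an illustration (not part of the original notes), Python's os module exposes thin wrappers over the open, write, read, and close system calls; the file name below is arbitrary:

```python
import os
import tempfile

# Illustrative sketch: os.open/os.write/os.read/os.close are thin wrappers
# over the corresponding system calls -- the process asks the kernel to
# perform the I/O on its behalf.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello via a system call\n")    # write() system call
os.close(fd)                                  # close() system call

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                       # read() system call
os.close(fd)
print(data)                                   # b'hello via a system call\n'
```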
Benefits of the microkernel approach:
• easier to extend a microkernel
• easier to port the operating system to new architectures
• more reliable (less code is running in kernel mode)
• more secure
Virtual Machines
• A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating
system kernel as though they were all hardware.
• A virtual machine provides an interface identical to the underlying bare hardware.
• The operating system creates the illusion of multiple processes, each executing on its own processor with
its own (virtual) memory.
• The resources of the physical computer are shared to create the virtual machines.
✦ CPU scheduling can create the appearance that users have their own processor.
✦ Spooling and a file system can provide virtual card readers and virtual line printers.
✦ A normal user time-sharing terminal serves as the virtual machine operator’s console.
According to the size of the partitions, multiple-partition schemes are divided into two types:
i. Multiple fixed partitions / multiprogramming with a fixed number of tasks (MFT)
ii. Multiple variable partitions / multiprogramming with a variable number of tasks (MVT)
i. Multiple fixed partitions: Main memory is divided into a number of static partitions at
system generation time. In this case, any process whose size is less than or equal to the partition
size can be loaded into any available partition. If all partitions are full and no process is in the
Ready or Running state, the operating system can swap a process out of any of the partitions
and load in another process, so that there is some work for the processor.
Advantages: Simple to implement; little operating system overhead.
Disadvantages:
* Inefficient use of memory due to internal fragmentation.
* Maximum number of active processes is fixed.
ii. Multiple variable partitions: With this partitioning, the partitions are of variable length
and number. When a process is brought into main memory, it is allocated exactly as much
memory as it requires and no more.
Advantages: No internal fragmentation and more efficient use of main memory.
Disadvantages: Inefficient use of the processor due to the need for compaction to counter external fragmentation.
Partition Selection Policy:
When multiple memory holes (partitions) are large enough to contain a process, the
operating system must use an algorithm to select the hole into which the process will be loaded. The
partition selection algorithms are as follows:
➢ First-fit: The OS scans the list of free memory sections and allocates the process to the first hole found that is large enough to hold it.
➢ Next-fit: The search starts at the last hole allocated, and the process is allocated to the next hole found that is large enough to hold it.
➢ Best-fit: The OS searches the entire list of holes to find the smallest hole that is large enough to hold the process.
➢ Worst-fit: The OS searches the entire list of holes to find the largest hole that is large enough to hold the process.
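A minimal Python sketch of first-fit, best-fit, and worst-fit selection (the hole sizes are illustrative values, not from the notes):

```python
def select_hole(holes, size, policy="first"):
    """Return the index of the hole chosen for a process of the given
    size under the named policy, or None if no hole is large enough."""
    fits = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not fits:
        return None
    if policy == "first":
        return fits[0][0]                        # first adequate hole
    if policy == "best":
        return min(fits, key=lambda t: t[1])[0]  # smallest adequate hole
    if policy == "worst":
        return max(fits, key=lambda t: t[1])[0]  # largest adequate hole
    raise ValueError("unknown policy: " + policy)

holes = [100, 500, 200, 300, 600]        # free-partition sizes (made up)
print(select_hole(holes, 212, "first"))  # 1 -> the 500-unit hole
print(select_hole(holes, 212, "best"))   # 3 -> the 300-unit hole
print(select_hole(holes, 212, "worst"))  # 4 -> the 600-unit hole
```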
Fragmentation: The wasting of memory space is called fragmentation. There are two types of
fragmentation as follows:
1. External Fragmentation: The total memory space exists to satisfy a request, but it is not
contiguous. This wasted space, not allocated to any partition, is called external
fragmentation. External fragmentation can be reduced by compaction: the goal is to
shuffle the memory contents to place all free memory together in one large block.
Compaction is possible only if relocation is dynamic and is done at execution time.
2. Internal Fragmentation: The allocated memory may be slightly larger than requested
memory. The wasted space within a partition is called internal fragmentation. One method
to reduce internal fragmentation is to use partitions of different size.
2. Noncontiguous memory allocation
In noncontiguous memory allocation, it is allowed to store the processes in non contiguous
memory locations. There are different techniques used to load processes into memory, as
follows:
1. Paging
2. Segmentation
3. Virtual memory paging (demand paging), etc.
PAGING
Main memory is divided into a number of equal-size blocks called frames. Each
process is divided into a number of equal-size blocks of the same length as the frames, called
pages. A process is loaded by loading all of its pages into available frames (which need not be
contiguous).
(Diagram of Paging hardware)
Where p is an index into the page table and d is the displacement within the page.
Example:
Consider a page size of 4 bytes and a physical memory of 32 bytes (8 frames); we show how the user's view of memory can be mapped into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0). Logical address 13 (page 3, offset 1) maps to physical address 9 (= (2 x 4) + 1).
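The arithmetic in this example can be replayed with a short sketch. The frames for pages 0, 1, and 3 come from the example text; frame 1 for page 2 is an assumed value:

```python
PAGE_SIZE = 4
# frame number for pages 0..3; pages 0, 1, 3 are from the example above,
# page 2 -> frame 1 is an assumption
page_table = [5, 6, 1, 2]

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)  # page number, offset
    return page_table[p] * PAGE_SIZE + d       # frame base + offset

for la in (0, 3, 4, 13):
    print(la, "->", translate(la))  # 0->20, 3->23, 4->24, 13->9
```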
Each operating system has its own methods for storing page tables. Most operating systems allocate
a page table for each process. A pointer to the page table is stored with the other register values (like
the instruction counter) in the process control block. When the dispatcher is told to start a process,
it must reload the user registers and define the correct hardware page-table values from the stored
user page table.
Implementation of Page Table
⇒ Generally, the page table is kept in main memory. The page-table base register (PTBR) points to
the page table, and the page-table length register (PTLR) indicates the size of the page table.
⇒ In this scheme every data/instruction access requires two memory accesses. One for the page
table and one for the data/instruction.
⇒ The two memory access problem can be solved by the use of a special fast-lookup hardware
cache called associative memory or translation look-aside buffers (TLBs).
Paging Hardware With TLB
The TLB is an associative and high-speed memory. Each entry in the TLB consists of two parts:
a key (or tag) and a value. The TLB is used with page tables in the following way.
The TLB contains only a few of the page-table entries. When a logical address is
generated by the CPU, its page number is presented to the TLB.
If the page number is found (known as a TLB Hit), its frame number is immediately
available and is used to access memory. It takes only one memory access.
If the page number is not in the TLB (known as a TLB miss), a memory reference to the
page table must be made. When the frame number is obtained, we can use it to access
memory. It takes two memory accesses.
In addition, the page number and frame number are added to the TLB, so that they will be
found quickly on the next reference.
If the TLB is already full of entries, the operating system must select one for replacement
using a replacement algorithm.
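The cost of hits versus misses is often summarized as an effective access time. A sketch assuming a 20 ns TLB lookup and a 100 ns memory access (illustrative figures, not from the notes):

```python
def effective_access_time(hit_ratio, mem_ns=100, tlb_ns=20):
    """Weighted average of the hit path (one memory access) and the
    miss path (page-table access plus data access)."""
    hit = tlb_ns + mem_ns        # TLB hit: one memory access
    miss = tlb_ns + 2 * mem_ns   # TLB miss: two memory accesses
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80))  # 140.0 ns
print(effective_access_time(0.98))  # 122.0 ns -- a higher hit ratio
                                    # brings the cost near one access
```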
Two-Level Page-Table Scheme:
In this scheme the page table itself is paged. The page number is divided into two parts, p1 and p2, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Address translation scheme for a two-level paging architecture:
2. Hashed Page Tables: This scheme is applicable for address spaces larger than 32 bits. In this
scheme, the virtual page number is hashed into a page table. This page table contains a chain
of elements hashing to the same location. Virtual page numbers are compared in this chain
searching for a match. If a match is found, the corresponding physical frame is extracted.
3. Inverted Page Table:
⇒ One entry for each real page of memory.
⇒ Entry consists of the virtual address of the page stored in that real memory location, with
information about the process that owns that page.
⇒ Decreases memory needed to store each page table, but increases time needed to search the
table when a page reference occurs.
Shared Pages
Shared code
➢ One copy of read-only (reentrant) code shared among processes (i.e., text editors,
compilers, window systems).
➢ Shared code must appear in same location in the logical address space of all processes.
Private code and data
➢ Each process keeps a separate copy of the code and data.
➢ The pages for the private code and data can appear anywhere in the logical address
space.
SEGMENTATION
The segment number is used as an index into the segment table. The offset d of the logical
address must be between 0 and the segment limit. If it is not, we trap to the operating system
(logical addressing attempt beyond the end of the segment). If the offset is legal, it is added to the
segment base to produce the physical memory address of the desired byte. Consider we
have five segments numbered from 0 through 4. The segments are stored in physical memory
as shown in figure. The segment table has a separate entry for each segment, giving start
address in physical memory (or base) and the length of that segment (or limit). For example,
segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment
2 is mapped onto location 4300 + 53 = 4353.
(Example of segmentation)
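The mapping for segment 2 in this example can be sketched as follows (only segment 2's table entry is shown; a reference past the limit traps):

```python
# (base, limit) entry for segment 2, taken from the example above
segment_table = {2: (4300, 400)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        # addressing attempt beyond the end of the segment
        raise MemoryError("trap: offset outside segment limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```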
VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. Only part of the program needs to be in memory for execution. It means
that Logical address space can be much larger than physical address space. Virtual memory
allows processes to easily share files and address spaces, and it provides an efficient mechanism
for process creation.
Virtual memory is the separation of user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided for programmers when
only a smaller physical memory is available. Virtual memory makes the task of programming
much easier, because the programmer no longer needs to worry about the amount of physical
memory available.
(Diagram showing virtual memory that is larger than physical memory)
When a process references a page that is not in main memory, a page fault occurs. The procedure for handling a page fault is as follows:
1. We check an internal table for this process to determine whether the reference was a
valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but the page has
not yet been brought into memory, we page it in as follows:
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and
the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The process
can now access the page as though it had always been in memory.
PAGE REPLACEMENT
Page replacement selects a victim frame when a page must be brought into memory and no
frame is free; the victim page is written out and the desired page is loaded in its place. Page
replacement can be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.
1. FIFO Page Replacement Algorithm
This is the simplest page replacement algorithm. The operating system keeps all pages in
memory in a queue, with the oldest page at the front. When a page needs to be replaced, the
page at the front of the queue is selected for removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.
➢ Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
➢ When 3 comes, it is already in memory, so —> 0 Page Faults.
➢ Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault.
➢ 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
➢ Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 Page Fault.
Total = 6 page faults.
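The fault count above can be checked with a short FIFO simulation:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for a reference string under FIFO replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page in memory:
            continue                          # hit: nothing to do
        faults += 1
        if len(memory) == frames:             # no free frame:
            memory.discard(queue.popleft())   # evict the oldest page
        memory.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6
```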
Example-2 (LRU replacement): Consider a page reference string beginning 7, 0, 1, 2, 0, 3, 0, 4 with 4 page frames.
➢ Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
➢ 0 is already there, so —> 0 Page Fault.
➢ When 3 comes it takes the place of 7, because 7 is the least recently used —> 1 Page Fault.
➢ 0 is already in memory, so —> 0 Page Fault.
➢ 4 takes the place of 1 —> 1 Page Fault.
➢ For the further page references —> 0 Page Faults, because those pages are already available in memory.
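Assuming the walk-through above uses the reference string 7, 0, 1, 2, 0, 3, 0, 4 with 4 frames (the prefix implied by the steps), a small LRU simulation reproduces the six faults:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement.
    `memory` is kept ordered from least to most recently used."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # hit: move page to the MRU position
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 4))  # 6
```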
4. LRU Approximation Page Replacement algorithm
In this algorithm, Reference bits are associated with each entry in the page table. Initially,
all bits are cleared (to 0) by the operating system. As a user process executes, the bit associated
with each page referenced is set (to 1) by the hardware. After some time, we can determine
which pages have been used and which have not been used by examining the reference bits.
This algorithm can be classified into different categories as follows:
i. Additional-Reference-Bits Algorithm: We can keep an 8-bit byte for each page
in a table in memory. At regular intervals, a timer interrupt transfers control to the
operating system. The operating system shifts the reference bit for each page into the
high-order bit of its 8-bit byte, shifting the other bits right by 1 bit position and discarding the low-order bit. These 8-bit shift registers contain the history of page use for the last eight time periods.
If we interpret these 8-bit bytes as unsigned integers, the page with the lowest number is the
LRU page, and it can be replaced.
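The shift step can be sketched as follows (the history bytes and reference bits are made-up values):

```python
def age_pages(history, reference_bits):
    """Timer-interrupt step: shift each page's reference bit into the
    high-order bit of its 8-bit history byte, dropping the low-order bit."""
    return [((hist >> 1) | (ref << 7)) & 0xFF
            for hist, ref in zip(history, reference_bits)]

history = [0b00000000, 0b01100000, 0b11100000]  # assumed 8-bit histories
history = age_pages(history, [1, 0, 1])         # assumed reference bits
print([format(h, "08b") for h in history])      # ['10000000', '00110000', '11110000']

# interpreting the bytes as unsigned integers, the smallest is the LRU page
lru_page = min(range(len(history)), key=lambda i: history[i])
print(lru_page)  # 1
```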
ii. Second-Chance Algorithm: The basic algorithm of second-chance replacement is
a FIFO replacement algorithm. When a page has been selected, we inspect its reference bit. If
the value is 0, we proceed to replace this page. If the reference bit is set to 1, we give that page
a second chance and move on to select the next FIFO page. When a page gets a second chance,
its reference bit is cleared and its arrival time is reset to the current time. Thus, a page that is
given a second chance will not be replaced until all other pages are replaced.
5. Counting-Based Page Replacement
We could keep a counter of the number of references that have been made to each page,
and develop the following two schemes.
i. LFU page-replacement algorithm: The least frequently used (LFU) page-replacement algorithm requires that the page with the smallest count be replaced. The reason for this selection is that an actively used page should have a large reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement algorithm replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
ALLOCATION OF FRAMES
When a page fault occurs, a free frame must be found to hold the new page. While the page
swap is taking place, a replacement can be selected and written to the disk as the user process
continues to execute. The operating system allocates its buffer and table space from the
free-frame list.
Two major allocation schemes:
1. Equal allocation
2. Proportional allocation
1. Equal allocation: The easiest way to split m frames among n processes is to give everyone
an equal share, m/n frames. This scheme is called equal allocation.
2. Proportional allocation: Here, available memory is allocated to each process according to
its size. Let the size of the virtual memory for process pi be si, and define S = ∑ si. Then, if the
total number of available frames is m, we allocate ai frames to process pi, where ai is
approximately ai = si/S x m.
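For instance, with two processes of 10 and 127 pages sharing 62 free frames (illustrative numbers), truncating ai = si/S x m gives:

```python
def proportional_allocation(sizes, m):
    """Allocate ai ~ si/S * m frames to each process (truncated down)."""
    S = sum(sizes)
    return [si * m // S for si in sizes]

# S = 137, so a1 = 10/137 * 62 ~ 4 and a2 = 127/137 * 62 ~ 57
print(proportional_allocation([10, 127], 62))  # [4, 57]
```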
Thrashing occurs when page faults and swapping happen so frequently that the operating
system spends more time swapping pages than executing processes. Because of thrashing,
CPU utilization becomes very low or negligible.
(Thrashing)