
OPERATING SYSTEM
B.Tech (CSE) Notes

Dr. Aarti
Assistant Professor (CSE)
BVCOE, New Delhi
UNIT-I

Introduction: Introduction to OS. Operating system functions, Different types of O.S.: batch
process, multi-programmed, time-sharing, real-time, distributed, parallel.
System Structure: Computer system operation, I/O structure, storage structure, storage
hierarchy, different types of protections, operating system structure (simple, layered, virtual
machine), O/S services, system calls.

Operating System
 A program that acts as an intermediary between a user of a computer and the computer hardware.
 An operating System is a collection of system programs that together control the operations of a computer
system.
Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.
Operating system goals:
• Execute user programs and make solving user problems easier.
• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.
Computer System Components
1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various application programs
for the various users.
3. Applications programs – Define the ways in which the system resources are used to solve the computing
problems of the users (compilers, database systems, video games, business programs).
4. Users (people, machines, other computers).

Dr. Aarti, Assistant Professor (CSE)


Abstract View of System Components

Operating System Definitions


Resource allocator – manages and allocates resources.
Control program – controls the execution of user programs and operations of I/O devices .
Kernel – The one program running at all times (all else being application programs).
Components of OS: The OS has two parts: (1) the kernel and (2) the shell.
(1) The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is a program that interacts directly with the hardware. Examples: device drivers, DLL files, system files, etc.
(2) The shell is also called the command interpreter. It is a set of programs used to interact with the application programs. It is responsible for executing the instructions given to the OS (called commands).
Operating systems can be explored from two viewpoints: the user and the system.
User View: From the user's point of view, the OS is designed for one user to monopolize its resources, to maximize the work that the user is performing, and for ease of use.
System View: From the computer's point of view, an operating system is a control program that manages the
execution of user programs to prevent errors and improper use of the computer. It is concerned with the operation
and control of I/O devices.

Functions of Operating System:


Process Management
A process is a program in execution. A process needs certain resources, including CPU time, memory, files,
and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in connection with process management:
✦ Process creation and deletion.
✦ Process suspension and resumption.
✦ Provision of mechanisms for:
• process synchronization
• process communication
Main-Memory Management
Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible
data shared by the CPU and I/O devices.



Main memory is a volatile storage device. It loses its contents in the case of system failure.
The operating system is responsible for the following activities in connection with memory management:
♦ Keeping track of which parts of memory are currently being used and by whom.
♦ Deciding which processes to load when memory space becomes available.
♦ Allocating and de-allocating memory space as needed.
File Management
A file is a collection of related information defined by its creator. Commonly, files represent programs
(both source and object forms) and data.
The operating system is responsible for the following activities in connection with file management:
✦ File creation and deletion.
✦ Directory creation and deletion.
✦ Support of primitives for manipulating files and directories.
✦ Mapping files onto secondary storage.
✦ File backup on stable (nonvolatile) storage media.
I/O System Management
The I/O system consists of:
✦ A buffer-caching system
✦ A general device-driver interface
✦ Drivers for specific hardware devices
Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to accommodate all data and programs
permanently, the computer system must provide secondary storage to back up main memory.
Most modern computer systems use disks as the principal online storage medium, for both programs and data. The operating system is responsible for the following activities in connection with disk management:
✦ Free-space management
✦ Storage allocation
✦ Disk scheduling
Networking (Distributed Systems)
♦ A distributed system is a collection of processors that do not share memory or a clock. Each processor has its own local memory.
♦ The processors in the system are connected through a communication network.
♦ Communication takes place using a protocol.
♦ A distributed system provides user access to various system resources.
♦ Access to a shared resource allows:
✦ Computation speed-up
✦ Increased data availability
✦ Enhanced reliability
Protection System
♦ Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.
♦ The protection mechanism must:
✦ distinguish between authorized and unauthorized usage.
✦ specify the controls to be imposed.
✦ provide a means of enforcement.
Command-Interpreter System
• Many commands are given to the operating system by control statements, which deal with:
✦ process creation and management
✦ I/O handling
✦ secondary-storage management
✦ main-memory management



✦ file-system access
✦ protection
✦ networking
• The program that reads and interprets control statements is called variously:
✦ command-line interpreter
✦ shell (in UNIX)
• Its function is to get and execute the next command statement.
Operating-System Structures
• System Components
• Operating System Services
• System Calls
• System Programs
• System Structure
• Virtual Machines
• System Design and Implementation
• System Generation
Common System Components
• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary-Storage Management
• Networking
• Protection System
• Command-Interpreter System

1. Mainframe Systems
These systems reduce setup time by batching similar jobs. Automatic job sequencing automatically transfers control from one job to another; this was the first rudimentary operating system, the resident monitor:
 initial control is in the monitor
 control transfers to the job
 when the job completes, control transfers back to the monitor
2. Batch Processing Operating System:
This type of OS accepts more than one job, and these jobs are batched/grouped together according to their similar requirements. This is done by the computer operator. Whenever the computer becomes available, the batched jobs are sent for execution, and gradually the output is sent back to the user. Only one program runs at a time. This OS is responsible for scheduling the jobs according to priority and the resources required.
3. Multiprogramming Operating System:
 This type of OS is used to execute more than one job simultaneously on a single processor. It increases CPU
utilization by organizing jobs so that the CPU always has one job to execute.
 The concept of multiprogramming is described as follows:
➢ All the jobs that enter the system are stored in the job pool (on disk). The operating system loads a set of jobs from the job pool into main memory and begins to execute them.



➢ During execution, the job may have to wait for some task, such as an I/O operation, to complete. In a
multiprogramming system, the operating system simply switches to another job and executes.
When that job needs to wait, the CPU is switched to another job, and so on.
➢ When the first job finishes waiting, it gets the CPU back.
➢ As long as at least one job needs to execute, the CPU is never idle. Multiprogramming operating
systems use the mechanism of job scheduling and CPU scheduling.
4. Time-Sharing/Multitasking Operating Systems
Time sharing (or multitasking) OS is a logical extension of multiprogramming. It provides extra facilities such as:
 Faster switching between multiple jobs to make processing faster.
 Allows multiple users to share computer system simultaneously.
 The users can interact with each job while it is running.
These systems use the concept of virtual memory for effective utilization of memory space. Hence, in this OS, no jobs are discarded; each one is executed using the virtual memory concept. It uses CPU scheduling, memory management, disk management and security management. Examples: CTSS, MULTICS, CAL, UNIX, etc.
5. Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such operating systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple jobs at the same time and make processing faster.
Multiprocessor systems have three main advantages:
 Increased throughput: By increasing the number of processors, the system performs more work in less time.
The speed-up ratio with N processors is less than N.
 Economy of scale: Multiprocessor systems can save more money than multiple single-processor systems,
because they can share peripherals, mass storage, and power supplies.
 Increased reliability: If one processor fails, then each of the remaining processors picks up a share of the work of the failed processor. The failure of one processor will not halt the system, only slow it down.

The ability to continue providing service proportional to the level of surviving hardware is called graceful
degradation. Systems designed for graceful degradation are called fault tolerant.

The multiprocessor operating systems are classified into two categories:


1. Symmetric multiprocessing system
2. Asymmetric multiprocessing system
 In symmetric multiprocessing system, each processor runs an identical copy of the operating system, and these
copies communicate with one another as needed.
 In an asymmetric multiprocessing system, one processor, called the master processor, controls the other processors, called slave processors, thus establishing a master-slave relationship. The master processor schedules the jobs and manages the memory for the entire system.
6. Distributed Operating Systems
 In a distributed system, different machines are connected in a network, and each machine has its own processor and its own local memory.
 In this system, the operating systems on all the machines work together to manage the collective network
resource.
 It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
Advantages of distributed systems:



 Resources Sharing
 Computation speed up – load sharing
 Reliability
 Communications
 Distributed systems require a networking infrastructure, such as local area networks (LAN) or wide area networks (WAN).
7. Desktop Systems/Personal Computer Systems
 The PC operating system is designed for maximizing user convenience and responsiveness. This system is
neither multi-user nor multitasking.
 These systems include PCs running Microsoft Windows and the Apple Macintosh. The MS-DOS operating
system from Microsoft has been superseded by multiple flavors of Microsoft Windows and IBM has upgraded
MS-DOS to the OS/2 multitasking system.
 The Apple Macintosh operating system has been ported to more advanced hardware, and now includes new
features such as virtual memory and multitasking.
8. Real-Time Operating Systems (RTOS)
 A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed
deadlines (real-time computing). Such applications include some small embedded systems, automobile engine
controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
 The real time operating system can be classified into two categories:
1. hard real time system and 2. soft real time system.
 A hard real-time system guarantees that critical tasks be completed on time. This goal requires that all delays
in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to
finish any request made of it. Such time constraints dictate the facilities that are available in hard real-time
systems.
 A soft real-time system is a less restrictive type of real-time system. Here, a critical real-time task gets priority
over other tasks and retains that priority until it completes. Soft real time system can be mixed with other
types of systems. Due to less restriction, they are risky to use for industrial control and robotics.

Operating System Services


Following are five services provided by operating systems for the convenience of users.
1. Program Execution
The purpose of computer systems is to allow the user to execute programs. So the operating system provides an environment where the user can conveniently run programs. Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiprogramming.
2. I/O Operations
Each program requires input and produces output. This involves the use of I/O devices. So the operating system provides I/O services, making it convenient for the user to run programs.
3. File System Manipulation
The output of a program may need to be written into new files or input taken from some files. The
operating system provides this service.
4. Communications
Processes need to communicate with each other to exchange information during execution. This may be between processes running on the same computer or on different computers. Communication can occur in two ways: (i) shared memory or (ii) message passing.
5. Error Detection



An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.
Following are three services provided by operating systems to ensure the efficient operation of the system itself.
1. Resource allocation
When multiple users are logged on the system or multiple jobs are running at the same time, resources
must be allocated to each of them. Many different types of resources are managed by the operating system.
2. Accounting
The operating systems keep track of which users use how many and which kinds of computer resources.
This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage
statistics.

3. Protection
When several disjoint processes execute concurrently, it should not be possible for one process to interfere with the others, or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important. Such security starts with each user having to authenticate himself to the system, usually by means of a password, to be allowed access to the resources.

System Call:

➢ System calls provide an interface between the process and the operating system.
➢ System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform.
➢ For example, for I/O, a process uses a system call to tell the operating system to read or write a particular area, and this request is satisfied by the operating system.
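As a minimal sketch (not part of the original notes), Python's `os` module exposes thin wrappers over POSIX system calls, so the file-management and information-maintenance calls listed below can be demonstrated directly; the file name `demo.txt` is arbitrary:

```python
import os

# File-management system calls: create/open, write, close, read, delete.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open()/creat()
os.write(fd, b"hello via system calls\n")                        # write()
os.close(fd)                                                     # close()

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)                                          # read()
os.close(fd)
os.unlink("demo.txt")                                            # unlink() deletes the file

# Information-maintenance system call: get process attributes.
pid = os.getpid()                                                # getpid()
print(pid, data)
```

Each `os.*` call traps into the kernel on the user process's behalf; the process never touches the disk hardware itself.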

An operating system provides the following different types of system calls:

Process control
• end, abort
• load, execute
• create process, terminate process
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
File management
• create file, delete file
• open, close
• read, write, reposition
• get file attributes, set file attributes
Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
Information maintenance
• get time or date, set time or date
• get system data, set system data
• get process, file, or device attributes
• set process, file, or device attributes
Communications
• create, delete communication connection
• send, receive messages
• transfer status information
• attach or detach remote devices

(An Operating System Layer)

(OS/2 Layer Structure)

Microkernel System Structure


• Moves as much as possible from the kernel into user space.
• Communication takes place between user modules using message passing.
Benefits:
• easier to extend a microkernel
• easier to port the operating system to new architectures
• more reliable (less code is running in kernel mode)
• more secure

Virtual Machines
• A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating
system kernel as though they were all hardware.
• A virtual machine provides an interface identical to the underlying bare hardware.
• The operating system creates the illusion of multiple processes, each executing on its own processor with
its own (virtual) memory.
• The resources of the physical computer are shared to create the virtual machines.
✦ CPU scheduling can create the appearance that users have their own processor.
✦ Spooling and a file system can provide virtual card readers and virtual line printers.
✦ A normal user time-sharing terminal serves as the virtual machine operator's console.
(System Models: Non-virtual Machine vs. Virtual Machine)

• Advantages/Disadvantages of Virtual Machines
• The virtual-machine concept provides complete protection of system resources, since each virtual machine is isolated from all other virtual machines. This isolation, however, permits no direct sharing of resources.
• A virtual-machine system is a perfect vehicle for operating-systems research and development. System
development is done on the virtual machine, instead of on a physical machine and so does not disrupt
normal system operation.
• The virtual-machine concept is difficult to implement due to the effort required to provide an exact duplicate of the underlying machine.
MEMORY ALLOCATION
The main memory must accommodate both the operating system and the various user
processes. We need to allocate different parts of the main memory in the most efficient way
possible.
The main memory is usually divided into two partitions: one for the resident operating
system, and one for the user processes. We may place the operating system in either low
memory or high memory. The major factor affecting this decision is the location of the interrupt
vector. Since the interrupt vector is often in low memory, programmers usually place the
operating system in low memory as well.
There are following two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Non contiguous memory allocation
1. Contiguous Memory Allocation
Here, all the processes are stored in contiguous memory locations. To load multiple processes
into memory, the Operating System must divide memory into multiple partitions for those
processes.
Hardware Support: The relocation-register scheme is used to protect user processes from each other, and from changes to operating-system code and data. The relocation register contains the value of the smallest physical address of a partition, and the limit register contains the range of that partition. Each logical address must be less than the value in the limit register.

(Hardware support for relocation and limit registers)
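In place of the missing figure, the relocation/limit check can be sketched as follows; the register values are illustrative, not taken from the notes:

```python
RELOCATION = 14000   # base physical address of the partition (assumed value)
LIMIT = 4600         # size (range) of the partition (assumed value)

def to_physical(logical):
    """Mimic the MMU: check the limit register, then add the relocation register."""
    if logical >= LIMIT:
        raise MemoryError("trap: addressing error (beyond partition limit)")
    return logical + RELOCATION

print(to_physical(350))   # logical 350 -> physical 14350
```

A logical address of 4600 or more would trap to the operating system instead of being translated.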

According to size of partitions, the multiple partition schemes are divided into two types:
i. Multiple fixed partitions / multiprogramming with a fixed number of tasks (MFT)
ii. Multiple variable partitions / multiprogramming with a variable number of tasks (MVT)
i. Multiple fixed partitions: Main memory is divided into a number of static partitions at
system generation time. In this case, any process whose size is less than or equal to the partition
size can be loaded into any available partition. If all partitions are full and no process is in the
Ready or Running state, the operating system can swap a process out of any of the partitions
and load in another process, so that there is some work for the processor.
Advantages: Simple to implement and little operating system overhead.
Disadvantage: * Inefficient use of memory due to internal fragmentation.
* Maximum number of active processes is fixed.
ii. Multiple variable partitions: With this partitioning, the partitions are of variable length
and number. When a process is brought into main memory, it is allocated exactly as much
memory as it requires and no more.
Advantages: No internal fragmentation and more efficient use of main memory.
Disadvantages: Inefficient use of the processor due to the need for compaction to counter external fragmentation.
Partition Selection Policy:
When the multiple memory holes (partitions) are large enough to contain a process, the
operating system must use an algorithm to select in which hole the process will be loaded. The
partition selection algorithm are as follows:
➢ First-fit: The OS scans the sections of free memory and allocates the process to the first hole found that is large enough to hold it.
➢ Next-fit: The search starts at the last hole allocated, and the process is allocated to the next hole found that is large enough to hold it.
➢ Best-fit: The OS searches the entire list of holes to find the smallest hole that is large enough to hold the process.
➢ Worst-fit: The OS searches the entire list of holes to find the largest hole that is large enough to hold the process.
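The selection policies above can be sketched as short functions over a list of hole sizes (next-fit is analogous to first-fit, but resumes from the last allocated index); the hole sizes and request size are illustrative:

```python
def first_fit(holes, size):
    """Index of the first hole large enough for the request, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole large enough for the request, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole large enough for the request, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]    # free-hole sizes in KB (assumed)
print(first_fit(holes, 212))         # 1 (the 500 KB hole)
print(best_fit(holes, 212))          # 3 (the 300 KB hole, smallest that fits)
print(worst_fit(holes, 212))         # 4 (the 600 KB hole, largest)
```

Best-fit tends to leave the smallest leftover hole, worst-fit the largest, which is why their fragmentation behavior differs in practice.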
Fragmentation: The wasting of memory space is called fragmentation. There are two types of
fragmentation as follows:
1. External Fragmentation: Enough total memory space exists to satisfy a request, but it is not contiguous. This wasted space, not allocated to any partition, is called external fragmentation. External fragmentation can be reduced by compaction: the goal is to shuffle the memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic and is done at execution time.
2. Internal Fragmentation: The allocated memory may be slightly larger than requested
memory. The wasted space within a partition is called internal fragmentation. One method
to reduce internal fragmentation is to use partitions of different size.
2. Noncontiguous memory allocation
In noncontiguous memory allocation, processes may be stored in noncontiguous memory locations. Different techniques are used to load processes into memory, as follows:
1. Paging
2. Segmentation
3. Virtual memory paging (demand paging), etc.
PAGING
Main memory is divided into a number of equal-size blocks called frames. Each process is divided into a number of equal-size blocks of the same length as the frames, called pages. A process is loaded by loading all of its pages into available frames (which need not be contiguous).
(Diagram of Paging hardware)

Process of Translation from logical to physical addresses


⇒ Every address generated by the CPU is divided into two parts: a page number (p) and a page
offset (d). The page number is used as an index into a page table.
⇒ The page table contains the base address of each page in physical memory. This base address
is combined with the page offset to define the physical memory address that is sent to the
memory unit.
⇒ If the size of the logical-address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order (m - n) bits of a logical address designate the page number and the n low-order bits designate the page offset. Thus, the logical address is as follows:

Where p is an index into the page table and d is the displacement within the page.
Example:
Consider a page size of 4 bytes and a
physical memory of 32 bytes (8 pages), we
show how the user's view of memory can
be mapped into physical memory. Logical
address 0 is page 0, offset 0. Indexing into
the page table, we find that page 0 is in
frame 5. Thus, logical address 0 maps to
physical address 20 (= (5 x 4) + 0). Logical
address 3 (page 0, offset 3) maps to
physical address 23 (= (5 x 4) + 3). Logical
address 4 is page 1, offset 0; according to
the page table, page 1 is mapped to frame
6. Thus, logical address 4 maps to physical
address 24 (= (6 x 4) + 0). Logical address
13 maps to physical address 9(= (2 x 4)+1).
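The translations in this example can be checked with a short sketch. The frames for pages 0 and 1 come from the text; the frame for page 3 is inferred from "logical address 13 maps to physical address 9 (= (2 x 4) + 1)", and the entry for page 2 is an assumed filler:

```python
PAGE_SIZE = 4

# Page table for the example: page 0 -> frame 5, page 1 -> frame 6 (from the
# text), page 3 -> frame 2 (inferred), page 2 -> frame 1 (assumed).
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)      # page number, page offset
    return page_table[p] * PAGE_SIZE + d   # frame base + offset

print(translate(0))   # 20
print(translate(3))   # 23
print(translate(4))   # 24
print(translate(13))  # 9
```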

Hardware Support for Paging:

Each operating system has its own methods for storing page tables. Most operating systems allocate
a page table for each process. A pointer to the page table is stored with the other register values (like
the instruction counter) in the process control block. When the dispatcher is told to start a process,
it must reload the user registers and define the correct hardware page-table values from the stored user page table.
Implementation of Page Table
⇒ Generally, the page table is kept in main memory. The page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table.
⇒ In this scheme every data/instruction access requires two memory accesses. One for the page
table and one for the data/instruction.
⇒ The two memory access problem can be solved by the use of a special fast-lookup hardware
cache called associative memory or translation look-aside buffers (TLBs).
Paging Hardware With TLB
The TLB is an associative and high-speed memory. Each entry in the TLB consists of two parts:
a key (or tag) and a value. The TLB is used with page tables in the following way.
 The TLB contains only a few of the page-table entries. When a logical address is
generated by the CPU, its page number is presented to the TLB.
 If the page number is found (known as a TLB Hit), its frame number is immediately
available and is used to access memory. It takes only one memory access.
 If the page number is not in the TLB (known as a TLB miss), a memory reference to the
page table must be made. When the frame number is obtained, we can use it to access
memory. It takes two memory accesses.
 In addition, it stores the page number and frame number to the TLB, so that they will be
found quickly on the next reference.
 If the TLB is already full of entries, the operating system must select one for replacement
by using replacement algorithm.

(Paging hardware with TLB)
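The lookup path described above can be modeled as a toy class. The notes do not fix a replacement algorithm, so FIFO replacement and a capacity of 4 entries are assumed here:

```python
from collections import OrderedDict

class TLB:
    """Tiny fully-associative TLB with FIFO replacement (illustrative only)."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()        # page number -> frame number

    def lookup(self, page, page_table):
        if page in self.entries:            # TLB hit: one memory access
            return self.entries[page], True
        frame = page_table[page]            # TLB miss: walk the page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[page] = frame          # cache for the next reference
        return frame, False

tlb = TLB()
pt = {0: 5, 1: 6}
print(tlb.lookup(0, pt))  # (5, False) -- first reference misses
print(tlb.lookup(0, pt))  # (5, True)  -- repeated reference hits
```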


The percentage of times that a particular page number is found in the TLB is called the hit ratio. The effective access time (EAT) is obtained as follows:
EAT = HR x (TLBAT + MAT) + MR x (TLBAT + 2 x MAT)
where HR: hit ratio, TLBAT: TLB access time, MAT: memory access time, MR: miss ratio (= 1 - HR).
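Plugging illustrative numbers into the EAT formula (an 80% hit ratio, 20 ns TLB access, and 100 ns memory access; these values are assumed, not from the notes):

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    """EAT = HR x (TLBAT + MAT) + MR x (TLBAT + 2 x MAT)."""
    miss_ratio = 1 - hit_ratio
    return (hit_ratio * (tlb_time + mem_time)
            + miss_ratio * (tlb_time + 2 * mem_time))

# 80% of references cost 120 ns (hit); 20% cost 220 ns (miss):
print(round(effective_access_time(0.80, 20, 100)))  # 140 ns
```

The 40 ns slowdown over a bare 100 ns memory access is the average cost of address translation at this hit ratio.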

Memory protection in Paged Environment:


⇒ Memory protection in a paged environment is accomplished by protection bits that are
associated with each frame. These bits are kept in the page table.
⇒ One bit can define a page to be read-write or read-only. This protection bit can be checked to verify that no writes are being made to a read-only page. An attempt to write to a read-only page causes a hardware trap to the operating system (a memory-protection violation).
⇒ One more bit is attached to each entry in the page table: a valid-invalid bit. When this bit is set to "valid," the associated page is in the process's logical-address space and is thus a legal (or valid) page. If the bit is set to "invalid," the page is not in the process's logical-address space.
⇒ Illegal addresses are trapped by using the valid-invalid bit. The operating system sets this
bit for each page to allow or disallow accesses to that page.

(Valid (v) or invalid (i) bit in a page table)
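The two checks above can be sketched with hypothetical page-table entries, each carrying a frame number, a valid-invalid bit, and a read-write bit (the entries are assumptions for illustration):

```python
# Each entry: (frame number, valid bit, writable bit).
page_table = {
    0: (5, True, True),      # valid, read-write
    1: (6, True, False),     # valid, read-only
    2: (None, False, False), # invalid: not in the logical-address space
}

def access(page, write=False):
    """Return the frame, or trap on an invalid or protection-violating access."""
    frame, valid, writable = page_table[page]
    if not valid:
        raise MemoryError("trap: invalid page reference")
    if write and not writable:
        raise MemoryError("trap: memory-protection violation (read-only page)")
    return frame

print(access(0, write=True))  # 5
```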

Structure of the Page Table

There are different structures of page table described as follows:


1. Hierarchical Page Table: When the number of pages is very high, the page table takes a large amount of memory. In such cases, we use a multilevel paging scheme to reduce the size of the page table. A simple technique is a two-level page table. Since the page table is itself paged, the page number is further divided into two parts, p1 and p2; together with the page offset d, the logical address is as follows:

Where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme:
Address translation scheme for a two-level paging architecture:
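In the absence of the figure, the two-level address split can be sketched with assumed field widths: 10 bits for p1, 10 bits for p2, and a 12-bit offset (4 KB pages) in a 32-bit logical address:

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, offset) for 10/10/12 fields."""
    p1 = (addr >> 22) & 0x3FF   # top 10 bits index the outer page table
    p2 = (addr >> 12) & 0x3FF   # next 10 bits index a page of the inner table
    d = addr & 0xFFF            # low 12 bits are the page offset
    return p1, p2, d

print(split(0x00403ABC))  # (1, 3, 2748): p1=1, p2=3, offset 0xABC
```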

2. Hashed Page Tables: This scheme is applicable to address spaces larger than 32 bits. In this scheme, the virtual page number is hashed into a page table. The page table contains a chain of elements hashing to the same location. Virtual page numbers are compared in this chain, searching for a match. If a match is found, the corresponding physical frame is extracted.
3. Inverted Page Table:
⇒ One entry for each real page of memory.
⇒ Entry consists of the virtual address of the page stored in that real memory location, with
information about the process that owns that page.
⇒ Decreases memory needed to store each page table, but increases time needed to search the
table when a page reference occurs.

Shared Pages
Shared code
➢ One copy of read-only (reentrant) code shared among processes (i.e., text editors,
compilers, window systems).
➢ Shared code must appear in same location in the logical address space of all processes.
Private code and data
➢ Each process keeps a separate copy of the code and data.
➢ The pages for the private code and data can appear anywhere in the logical address
space.

SEGMENTATION

Segmentation is a memory-management scheme that supports the user's view of memory. A program is a collection of segments. A segment is a logical unit such as: a main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays, etc.
A logical-address space is a collection of segments. Each segment has a name and a
length. The user specifies each address by two quantities: a segment name/number and an
offset.
Hence, Logical address consists of a two tuple: <segment-number, offset>
The segment table maps two-dimensional user-defined addresses into one-dimensional physical addresses; each entry in the table has:
base – contains the starting physical address where the segment resides in memory.
limit – specifies the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates number of segments used by a program.

(Diagram of Segmentation Hardware)

The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond the end of the segment). If the offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. Consider five segments numbered from 0 through 4, stored in physical memory as shown in the figure. The segment table has a separate entry for each segment, giving the starting address in physical memory (the base) and the length of that segment (the limit). For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
(Example of segmentation)
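The translation in the example above can be sketched in a few lines of Python. The entry for segment 2 (base 4300, limit 400) comes from the text; the entry for segment 0 is a hypothetical value added for illustration.

```python
# Sketch of segment-table address translation.
segment_table = {
    0: (1400, 1000),  # (base, limit) -- illustrative values
    2: (4300, 400),   # segment 2 is 400 bytes long, starting at 4300
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not (0 <= offset < limit):
        # offset beyond the segment limit: trap to the operating system
        raise MemoryError("trap: addressing attempt beyond end of segment")
    return base + offset

print(translate(2, 53))  # byte 53 of segment 2 -> 4300 + 53 = 4353
```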

VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. Only part of the program needs to be in memory for execution. It means
that Logical address space can be much larger than physical address space. Virtual memory
allows processes to easily share files and address spaces, and it provides an efficient mechanism
for process creation.
Virtual memory is the separation of user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided for programmers when
only a smaller physical memory is available. Virtual memory makes the task of programming
much easier, because the programmer no longer needs to worry about the amount of physical
memory available.
(Diagram showing virtual memory that is larger than physical memory)

Virtual memory can be implemented via:
➢ Demand paging
➢ Demand segmentation
DEMAND PAGING
A demand-paging system is similar to a paging system with swapping. Generally,
processes reside on secondary memory (which is usually a disk). When we want to execute a
process, we swap it into memory. Rather than swapping the entire process in, however, only
the required pages are brought into memory. This is done by a lazy swapper.
A lazy swapper never swaps a page into memory unless that page will be needed. A
swapper manipulates entire processes, whereas a pager is concerned with the individual pages
of a process.
Page transfer Method: When a process is to be swapped in, the pager guesses which pages will
be used before the process is swapped out again. Instead of swapping in a whole process, the
pager brings only those necessary pages into memory. Thus, it avoids reading into memory
pages that will not be used anyway, decreasing the swap time and the amount of physical
memory needed.
(Transfer of a paged memory to contiguous disk space)
Page Table:
➢ The valid-invalid bit scheme of Page table can be used for indicating which pages are
currently in memory.
➢ When this bit is set to "valid", this value indicates that the associated page is both legal and
in memory. If the bit is set to "invalid", this value indicates that the page either is not valid
or is valid but is currently on the disk.
➢ The page-table entry for a page that is brought into memory is set as usual, but the
page-table entry for a page that is not currently in memory is simply marked invalid, or
contains the address of the page on disk.
(Page table when some pages are not in main memory)

When a process references a page marked invalid, a Page Fault occurs. It means that the page
is not in main memory. The procedure for handling a page fault is as follows:
1. We check an internal table for this process, to determine whether the reference was a
valid or invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but the page has
not yet been brought into memory, we page it in.
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and
the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The process
can now access the page as though it had always been in memory.

(Diagram of Steps in handling a page fault)
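The six steps above can be sketched as a hypothetical fault handler. The page-table layout (page → (frame, valid bit)), the frame numbers, and the disk dictionary below are all illustrative assumptions, not a real OS interface.

```python
# Hypothetical sketch of page-fault handling; disk contents are simulated
# with a dict mapping page numbers to their stored data.
frame_contents = {}

def handle_page_fault(page, page_table, free_frames, disk):
    if page not in disk:                  # step 1-2: invalid reference
        raise RuntimeError("terminate process: invalid memory access")
    frame = free_frames.pop()             # step 3: take a free frame
    frame_contents[frame] = disk[page]    # step 4: read page from disk
    page_table[page] = (frame, True)      # step 5: mark the page valid
    return frame                          # step 6: restart the instruction

page_table = {}
disk = {7: "code for page 7"}
print(handle_page_fault(7, page_table, [3, 5], disk))  # frame 5 is used
```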


Note: Pages are copied into memory only when they are required. Starting a process with no
pages in memory and faulting each page in on its first reference is called Pure Demand Paging.
Performance of Demand Paging
Let p be the probability of a page fault (0 ≤ p ≤ 1). Then the effective access time is
Effective access time = (1 - p) x memory access time + p x page fault time
In any case, we are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
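As a worked example of the formula above (the 200 ns memory access time and 8 ms page-fault service time are illustrative assumptions, not fixed constants):

```python
# Effective access time under demand paging:
# EAT = (1 - p) * memory_access_time + p * page_fault_time
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per 1000 accesses dominates the average access time:
print(effective_access_time(0.001))  # 0.999*200 + 0.001*8e6 = 8199.8 ns
```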

PAGE REPLACEMENT
Page replacement is the mechanism that frees a frame when a page must be brought into
memory and no free frame exists: a victim page is written out to disk, and the desired page is
read into the freed frame. Page replacement can be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.

(Diagram of Page replacement)

Page Replacement Algorithms: A page replacement algorithm decides which memory page
to page out (swap out, write to disk) when a page of memory needs to be allocated. We evaluate
an algorithm by running it on a particular string of memory references and computing the
number of page faults. The string of memory references is called a reference string. The
different page replacement algorithms are described as follows:

1. First-In-First-Out (FIFO) Algorithm:

This is the simplest page replacement algorithm. The operating system keeps track of
all pages in memory in a queue, with the oldest page at the front. When a page needs to
be replaced, the page at the front of the queue is selected for removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find
the number of page faults.

(FIFO page-replacement algorithm)

➢ Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty
slots —> 3 Page Faults.
➢ When 3 comes, it is already in memory —> 0 Page Faults.
➢ Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault.
➢ 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
➢ Finally, when 3 comes again it is no longer in memory, so it replaces 0 —> 1 Page Fault.
In total, 6 page faults occur.
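The FIFO walk-through above can be checked with a short sketch:

```python
from collections import deque

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page in memory:
            continue                          # hit: nothing to do
        faults += 1
        if len(memory) == frames:
            memory.discard(queue.popleft())   # evict the oldest page
        memory.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6, matching Example-1
```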

2. Optimal Page Replacement algorithm:


In this algorithm, the page replaced is the one that will not be used for the longest period of
time in the future.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.
Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Faults.
When 3 comes, it takes the place of 7, because 7 is not used for the longest duration of time in
the future —> 1 Page Fault.
0 is already there —> 0 Page Faults.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page Faults, because those pages are already in
memory. In total, 6 page faults occur.
Optimal page replacement is perfect but impossible in practice, since the operating system
cannot know future requests. It is instead used as a benchmark against which other replacement
algorithms can be measured.
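A minimal sketch of Optimal replacement, usable offline since it needs the full reference string in advance:

```python
def optimal_faults(refs, frames):
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # evict the page whose next use lies farthest in the future
            # (a page never used again counts as farthest of all)
            future = refs[i + 1:]
            victim = max(memory, key=lambda p: future.index(p)
                         if p in future else len(future))
            memory.discard(victim)
        memory.add(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```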

3. LRU Page Replacement algorithm


In this algorithm, the page replaced is the one that was least recently used.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.

Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Faults.
When 3 comes, it takes the place of 7, because 7 is the least recently used page —> 1 Page
Fault.
0 is already in memory —> 0 Page Faults.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page Faults, because those pages are already in
memory. In total, 6 page faults occur.
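A sketch of LRU using a list ordered from least to most recently used:

```python
def lru_faults(refs, frames):
    memory, faults = [], 0   # front of list = least recently used
    for page in refs:
        if page in memory:
            memory.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)          # page is now most recently used
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```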
4. LRU Approximation Page Replacement algorithm
In this algorithm, Reference bits are associated with each entry in the page table. Initially,
all bits are cleared (to 0) by the operating system. As a user process executes, the bit associated
with each page referenced is set (to 1) by the hardware. After some time, we can determine
which pages have been used and which have not been used by examining the reference bits.
This algorithm can be classified into different categories as follows:
i. Additional-Reference-Bits Algorithm: An 8-bit byte is kept for each page
in a table in memory. At regular intervals, a timer interrupt transfers control to the
operating system. The operating system shifts the reference bit for each page into the
high-order bit of its 8-bit byte, shifting the other bits right by 1 position and discarding the
low-order bit. These 8-bit shift registers contain the history of page use for the last eight time
periods.
If we interpret these 8-bit values as unsigned integers, the page with the lowest number is the
LRU page, and it can be replaced.
ii. Second-Chance Algorithm: The basic algorithm of second-chance replacement is
a FIFO replacement algorithm. When a page has been selected, we inspect its reference bit. If
the value is 0, we proceed to replace this page. If the reference bit is set to 1, we give that page
a second chance and move on to select the next FIFO page. When a page gets a second chance,
its reference bit is cleared and its arrival time is reset to the current time. Thus, a page that is
given a second chance will not be replaced until all other pages are replaced.
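Both approximation schemes above can be sketched briefly; the frame contents and reference-bit values used below are illustrative assumptions.

```python
def age(history, ref_bit):
    """Additional-reference-bits: shift the current reference bit into the
    high-order position of an 8-bit history byte (lowest value = LRU page)."""
    return ((history >> 1) | (ref_bit << 7)) & 0xFF

def second_chance_victim(frames):
    """Second chance: frames is a list of [page, ref_bit] in FIFO order.
    Returns the index of the victim frame, clearing bits along the way."""
    i = 0
    while True:
        page, ref = frames[i]
        if ref == 0:
            return i          # reference bit clear: replace this page
        frames[i][1] = 0      # set bit found: give the page a second chance
        i = (i + 1) % len(frames)

h = age(age(0, 1), 0)                                  # referenced, then idle
print(format(h, '08b'))                                # 01000000
print(second_chance_victim([[7, 1], [0, 0], [1, 1]]))  # 1 (page 0's bit is 0)
```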
5. Counting-Based Page Replacement
We could keep a counter of the number of references that have been made to each page,
and develop the following two schemes:
i. LFU page-replacement algorithm: The least frequently used (LFU) page-replacement
algorithm requires that the page with the smallest count be replaced. The reason for this
selection is that an actively used page should have a large reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement
algorithm requires that the page with the largest count be replaced.

ALLOCATION OF FRAMES

When a page fault occurs, a free frame is needed to hold the incoming page. While the
page swap is taking place, a replacement can be selected and written to the disk as the user
process continues to execute. The operating system allocates its own buffer and table space
from the free-frame list; the remaining free frames must be divided among the user processes.
There are two major allocation schemes:
1. Equal allocation
2. Proportional allocation
1. Equal allocation: The easiest way to split m frames among n processes is to give everyone
an equal share, m/n frames. This scheme is called equal allocation.
2. Proportional allocation: Here, available memory is allocated to each process according to
its size. Let the size of the virtual memory for process pi be si, and define S = Σ si. Then, if
the total number of available frames is m, we allocate ai frames to process pi, where ai is
approximately ai = (si / S) x m.
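Both schemes can be sketched as follows. The frame count and process sizes are illustrative: 62 frames split between a 10-page and a 127-page process.

```python
def equal_allocation(m, n):
    # each of the n processes gets an equal share of the m frames
    return [m // n] * n

def proportional_allocation(m, sizes):
    # ai = si / S * m, rounded down to whole frames
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(equal_allocation(62, 2))                 # [31, 31]
print(proportional_allocation(62, [10, 127]))  # [4, 57]
```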

Global Versus Local Allocation


We can classify page-replacement algorithms into two broad categories: global
replacement and local replacement.
Global replacement allows a process to select a replacement frame from the set of all
frames, even if that frame is currently allocated to some other process; one process can take a
frame from another.
Local replacement requires that each process select from only its own set of allocated
frames.
THRASHING
The system spends most of its time shuttling pages between main memory and secondary
memory due to frequent page faults. This behavior is known as thrashing.
A process is thrashing if it is spending more time paging than executing. When page faults
and swapping happen at a very high rate, the operating system must spend most of its time
moving pages back and forth, so CPU utilization drops to a low or negligible level. Seeing the
low utilization, the operating system may then think it needs to increase the degree of
multiprogramming, which only makes the thrashing worse.

(Thrashing)
