Operating System (Digital Content) 2025

The document provides a comprehensive overview of operating systems, detailing their history, functions, and components. It discusses the evolution of operating systems from the 1940s to the fourth generation, highlighting key features and advancements in each era. Additionally, it outlines the essential roles of operating systems in managing hardware resources, ensuring security, and facilitating user interaction with computer systems.


OPERATING SYSTEMS

Carolyn 1

Table of Contents
CHAPTER ONE..............................................................................................................................6
INTRODUCTION TO OPERATING SYSTEMS..........................................................................6
History of Operating Systems...................................................................................................8
The 1940's - First Generations..............................................................................................8
The 1950's - Second Generation...........................................................................................9
The 1960's - Third Generation.............................................................................................9
Fourth Generation...............................................................................................................10
Features common to DOS and OS JCL.................................................................................25
Jobs, steps and procedures..................................................................................................25
Layered Architecture of Operating System....................................................................................29
Advantages of Layered architecture:..................................................................................................30
Essential Components in a Microkernel....................................................................................31
Performance of a Microkernel System......................................................................................32
Benefits of Microkernels...........................................................................................................32
Kernel Level Threads................................................................................................................36
Advantages...........................................................................................................................37
Disadvantages.......................................................................................................................37
Structure of the Process Control Block.................................................................................39
Process State..........................................................................................................................39
Process Number.....................................................................................................................39
Program Counter....................................................................................................................39
Registers................................................................................................................................39
List of Open Files..................................................................................................................39
CPU Scheduling Information.............................................................................................40
I/O Status Information........................................................................................................40
Accounting information......................................................................................................40
Location of the Process Control Block...............................................................................40
New............................................................................................................................................41
Ready........................................................................................................................................41
Ready Suspended.....................................................................................................................41
Running....................................................................................................................................41
Blocked......................................................................................................................................41


Blocked Suspended..................................................................................................................42
Terminated...............................................................................................................................42
Difference Between Process And Program............................................................................43
Differences Between Semaphore and Monitor......................................................................45
Long Term Scheduler..............................................................................................................47
Short Term Scheduler.............................................................................................................47
Medium Term Scheduler........................................................................................................47
Comparison among Scheduler................................................................................................49
Context Switch...........................................................................................................................49
First Come First Serve (FCFS)..................................................................................................50
Shortest Job Next (SJN).............................................................................................................51
Priority Based Scheduling.........................................................................................................53
Shortest Remaining Time..........................................................................................................54
Round Robin Scheduling...........................................................................................................55
Multiple-Level Queues Scheduling...........................................................................................55
Goals of I/O Software............................................................................................................106
Types of Device Drivers.............................................................................................................107
Classification of Drivers According to Functionality.....................................................107
Device-Independent I/O Software........................................................................................110
User-Space I/O Software.......................................................................................................110
Kernel I/O Subsystem............................................................................................................110
The Physical Parts of a Disk.................................................................................................114
Figure 1-1 A Disk...............................................................................................................114
Magnetic Surface................................................................................................................114
Bits.......................................................................................................................................115
Byte......................................................................................................................................115
Block, Sector.......................................................................................................................115
Cluster.................................................................................................................................116
Tracks..................................................................................................................................116
Platters.................................................................................................................................117
Cylinder...............................................................................................................................117
Head....................................................................................................................................117
Arms....................................................................................................................................118
Spindle.................................................................................................................................118


Drive....................................................................................................................................119
Cable....................................................................................................................................119
Controller............................................................................................................................119
Intelligent Disk Controller Functions..................................................................................119
Seek Ordering.....................................................................................................................120
Data Caching......................................................................................................................120
Computer Terminal...................................................................................................................132
1. Text terminals..........................................................................................................132
2. Graphical terminals................................................................................................132
Modes......................................................................................................................................133
Serial Lines.............................................................................................................................133
Properties of a File System....................................................................................................138
File structure..........................................................................................................................139
File Attributes........................................................................................................................139
File Type.................................................................................................................................140
Character Special File..........................................................................................................140
Ordinary files.......................................................................................................................140
Directory Files.....................................................................................................................140
Special Files.........................................................................................................................140
Functions of File.....................................................................................................................140
Commonly used terms in File systems.................................................................................140
Field:....................................................................................................................................141
DATABASE:.......................................................................................................................141
FILES:..................................................................................................................................141
RECORD:............................................................................................................................141
File Access Methods...............................................................................................................141
Sequential Access................................................................................................................141
Random Access...................................................................................................................141
Sequential Access................................................................................................................142
Space Allocation.....................................................................................................................142
Contiguous Allocation.........................................................................................................142
Linked Allocation................................................................................................................142
Indexed Allocation...............................................................................................................142
File Directories.......................................................................................................................143


File types- name, extension...................................................................................................143


Summary:.............................................................................................................................144
File Structure...........................................................................................................................145
File Type..................................................................................................................................145
Ordinary files.....................................................................................................................145
Directory files.....................................................................................................................145
Special files.........................................................................................................................145
File Access Mechanisms..........................................................................................................145
Sequential access................................................................................................................146
Direct/Random access.......................................................................................................146
Indexed sequential access..................................................................................................146
Space Allocation......................................................................................................................146
Contiguous Allocation.......................................................................................................146
Linked Allocation...............................................................................................................147
Indexed Allocation.............................................................................................................147
FAT File System.....................................................................................................................147
Features of FAT File System...............................................................................................147
FAT32 File System.................................................................................................................147
Features of FAT32 File System...........................................................................................148
NTFS File System..................................................................................................................148
Features of NTFS File System.............................................................................................148


CHAPTER ONE
INTRODUCTION TO OPERATING SYSTEMS

• An operating system is a set of programs used to manage the basic hardware resources of a computer.

• It is the main program that controls the execution of user applications and enables the user to access the hardware and software resources of the computer.

When the computer is switched on, the OS programs run and check that all parts of the computer are functioning properly.

Operating system’s platform

In a data processing environment, the user sees a computer as a group of application programs that enable him/her to accomplish specific tasks.
However, application programs do not use the hardware devices directly. They send requests through the operating system, which has the capability to instruct the hardware to perform a particular task.

The user communicates his/her intentions to the OS through a special instruction set known as Commands.
User
(runs application programs)
        |
        v
Application software
(sends user requests to the OS)
        |
        v
Operating system
(receives and controls execution of application programs)
        |
        v
Hardware
(receives and executes OS commands)


As in this diagram, the OS is a layer of software on top of the bare hardware. It manages all parts of the computer hardware and also acts as an interface between the user and the computer.

The OS monitors and controls computer operations so that the user can do useful work on the computer, and it also enables application programs to use the hardware in a proper, orderly and efficient way.

An OS contains a special program called the Supervisor (also known as the Kernel or Executive), which is kept resident in main memory. The Supervisor/Kernel contains the most essential commands and procedures, and it controls the running of all other OS programs, each of which performs a particular service.

NB: The programs that make up the operating system are too large to fit in main memory at one time. They are usually installed on a direct-access backing storage device, such as the hard disk. When the Supervisor needs a particular program, it is read from the disk and loaded into RAM, where it can be executed.

Why operating systems are needed in a computer (why operating systems were developed)

(i) Modern computer systems are so complex and fast that they need internal control.

(ii) To ensure that the full range of system software facilities is readily available.

(iii) Because of system complexity, jobs need to be controlled in what they are allowed to do, for security.

(iv) To increase throughput, i.e., the amount of data that can be processed through the system in a given period of time.

(v) To improve communication between the user and the computer.

(vi) To make complex tasks simple for the user to carry out.

(vii) To help the computer correct any problem that might occur.

(viii) When an error occurs that could cause the computer to stop functioning, a diagnostic message is displayed. The meaning of the message is then checked in the computer operations manual.


(ix) To reduce job setup time. While one job is running, other programs can be read onto the job queue, and the input/output devices can be made ready without delay.

(x) To allow many programs to be run, and many users to use the system, at the same time.

Devices/resources under the control of an Operating System

A computer is composed of a set of software-controlled resources that enable the movement, storage and processing of data and information.

As a resource manager, the OS manages the following basic resources/ devices: -

1. Processor.
2. Main memory (RAM).
3. Secondary storage devices.
4. Input/Output devices and their Ports.
5. Communication devices and their Ports.
6. Files.

History of Operating Systems

Historically, operating systems have been closely tied to computer architecture, so it is a good idea to study the history of operating systems alongside the architecture of the computers on which they ran.

Operating systems have evolved through a number of distinct phases or generations which
correspond roughly to the decades.

The 1940's - First Generations

The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards). Programming languages were unknown (not even assembly languages), and operating systems were unheard of.


The 1950's - Second Generation

By the early 1950s, the routine had improved somewhat with the introduction of punched cards. The General Motors Research Laboratories implemented the first operating system in the early 1950s for their IBM 701. The systems of the 1950s generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups, or batches.

The 1960's - Third Generation

The systems of the 1960s were also batch processing systems, but they were able to take better advantage of the computer's resources by running several jobs at once. Operating system designers therefore developed the concept of multiprogramming, in which several jobs are in main memory at once; the processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use.

For example, on a system with no multiprogramming, when the current job paused to wait for an I/O operation to complete, the CPU simply sat idle until the I/O finished. The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job was waiting for I/O to complete, another job could be using the CPU.

Another major feature of third-generation operating systems was a technique called spooling (simultaneous peripheral operations on line). In spooling, a high-speed device such as a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, output is written to the disk. Programs can run to completion faster, and other programs can be initiated sooner; when the printer becomes available, the output is printed.

Note that the spooling technique is much like thread being spun onto a spool so that it may later be unwound as needed.
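The spooling idea above can be sketched in a few lines: instead of sending each line straight to the slow printer, a running program writes its output to a fast spool area and finishes; a separate despooler drains the spool when the printer becomes free. This is a minimal in-memory sketch (the deque stands in for the disk spool area and the job names are made up), not a real spooler.

```python
from collections import deque

spool_area = deque()  # stands in for the fast disk holding spooled output

def program_output(job_name, lines):
    """The running program writes to the spool area at disk speed and finishes."""
    for line in lines:
        spool_area.append((job_name, line))

def despool_to_printer():
    """Later, when the slow printer is free, drain the spool in arrival order."""
    printed = []
    while spool_area:
        job_name, line = spool_area.popleft()
        printed.append(f"[{job_name}] {line}")
    return printed

# Two jobs run to completion without ever waiting on the printer.
program_output("job1", ["hello", "world"])
program_output("job2", ["report"])
output = despool_to_printer()
```

Note how both jobs complete before a single line reaches the "printer": that decoupling is the whole point of spooling.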


Another feature present in this generation was time-sharing, a variant of multiprogramming in which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the system must respond quickly to user requests; otherwise user productivity suffers. Timesharing systems were developed to multiprogram large numbers of simultaneous interactive users.
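The essence of time-sharing can be simulated by giving each user's job a short quantum of CPU time in turn. The sketch below (user names and quantum are illustrative) cycles through the jobs, runs each for at most one quantum, and puts unfinished jobs back at the end of the queue, which is what keeps every interactive user getting frequent turns.

```python
from collections import deque

def time_share(jobs, quantum=2):
    """jobs: dict of name -> remaining CPU units. Returns the order of CPU turns."""
    ready = deque(jobs.items())
    turns = []
    while ready:
        name, remaining = ready.popleft()
        turns.append(name)                    # this user gets the CPU now
        remaining -= min(quantum, remaining)  # run for at most one quantum
        if remaining > 0:
            ready.append((name, remaining))   # unfinished: back of the queue
    return turns

turns = time_share({"alice": 3, "bob": 2, "carol": 5})
# alice, bob and carol each get a turn before anyone gets a second one
```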

Fourth Generation

With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s. Two operating systems came to dominate the personal computer scene: MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX, which was dominant on larger personal computers and workstations using the Motorola 68000 CPU family.

Functions of Operating Systems

Security –
The operating system uses password protection and similar techniques to protect user data. It also prevents unauthorized access to programs and user data.

1. Protection
In a computer system with multiple users and concurrent execution of multiple processes, the various processes must be protected from one another's activities.
Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. The major activities of an operating system with respect to protection are:
• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid access attempts.
• The OS provides authentication features for each user by means of passwords.
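The password-based authentication mentioned above can be sketched as follows. Real operating systems store salted password hashes in a protected file rather than plain-text passwords; this minimal sketch uses Python's standard hashlib, and the user name and passwords are made up purely for illustration.

```python
import hashlib
import hmac
import os

# Stand-in for the OS's protected password store (e.g. a shadow file).
user_table = {}  # username -> (salt, password_hash)

def register(username, password):
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    user_table[username] = (salt, digest)

def authenticate(username, password):
    if username not in user_table:
        return False
    salt, stored = user_table[username]
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)  # constant-time comparison

register("carolyn", "s3cret")
ok = authenticate("carolyn", "s3cret")   # correct password accepted
bad = authenticate("carolyn", "wrong")   # wrong password rejected
```

Storing only the salted hash means that even someone who reads the password table cannot recover the original passwords.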


2. Control over system performance –
The OS monitors overall system health to help improve performance. It records the response time between service requests and system responses to give a complete view of system health. This can help improve performance by providing the information needed to troubleshoot problems.
3. Job accounting –
The operating system keeps track of the time and resources used by various tasks and users. This information can be used to track resource usage for a particular user or group of users.
4. Error detecting and handling
The operating system constantly monitors the system to detect errors and avoid malfunctioning of the computer system.

Error handling: Errors can occur anytime and anywhere, whether in the CPU, in I/O devices or in the memory hardware. The major activities of an operating system with respect to error handling are:

• The OS constantly checks for possible errors.

• The OS takes appropriate action to ensure correct and consistent computing.

5. Coordination between other software and users –
Operating systems also coordinate and assign interpreters, compilers, assemblers and other software to the various users of the computer system.
6. Memory Management –
The operating system manages the primary (main) memory. Main memory is made up of a large array of bytes or words, each with its own address. Main memory is fast storage that can be accessed directly by the CPU, and a program must be loaded into main memory before it can be executed. An operating system performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program (the addresses that have already been allocated and those that have not yet been used). In multiprogramming, the OS decides the order in which processes are granted access to memory, and for how long. It allocates the memory


to a process when the process requests it, and deallocates the memory when the process has terminated or is performing an I/O operation.
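The bookkeeping just described, tracking which memory belongs to which process and reclaiming it on termination, can be sketched with a simple allocation table. This is an illustrative model (fixed-size frames and made-up process names), not how a real memory manager is implemented.

```python
MEMORY_SIZE = 8
frames = [None] * MEMORY_SIZE  # None = free; otherwise the owning process

def allocate(process, n_frames):
    """Give `process` the first n free frames; return their indices, or None."""
    free = [i for i, owner in enumerate(frames) if owner is None]
    if len(free) < n_frames:
        return None                 # not enough free memory
    for i in free[:n_frames]:
        frames[i] = process
    return free[:n_frames]

def deallocate(process):
    """Reclaim every frame owned by a terminated process."""
    for i, owner in enumerate(frames):
        if owner == process:
            frames[i] = None

a = allocate("P1", 3)   # P1 gets frames 0, 1, 2
b = allocate("P2", 4)   # P2 gets frames 3, 4, 5, 6
deallocate("P1")        # P1 terminates; its frames become free again
c = allocate("P3", 4)   # P3 reuses P1's frames plus the last free one
```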

7. Processor Management –
In a multiprogramming environment, the OS decides the order in which processes have access to the processor, and how much processing time each process gets. This function of the OS is called process scheduling. An operating system performs the following activities for processor management:
It keeps track of the status of processes; the program that performs this task is known as the traffic controller. It allocates the CPU (processor) to a process, and de-allocates the processor when a process is no longer required.
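The processor-management bookkeeping above can be sketched as a ready queue plus a state table (playing the role of the traffic controller). The process names are illustrative and the dispatch order is simple first-come-first-served; real schedulers are far more elaborate.

```python
from collections import deque

status = {}            # the "traffic controller": process -> current state
ready_queue = deque()  # processes waiting for the CPU

def admit(process):
    status[process] = "ready"
    ready_queue.append(process)

def dispatch():
    """Allocate the CPU to the process at the head of the ready queue."""
    process = ready_queue.popleft()
    status[process] = "running"
    return process

def terminate(process):
    """De-allocate the processor when the process is no longer required."""
    status[process] = "terminated"

for p in ["P1", "P2", "P3"]:
    admit(p)
running = dispatch()   # P1 gets the CPU first (arrival order)
terminate(running)
running = dispatch()   # the CPU passes to P2
```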

8. Device Management –
An OS manages device communication via the devices' respective drivers. It performs the following activities for device management: it keeps track of all devices connected to the system; it designates a program responsible for every device, known as the Input/Output controller; it decides which process gets access to a certain device and for how long; it allocates devices in an effective and efficient way; and it deallocates devices when they are no longer required.
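Device allocation as just described, tracking devices, deciding which process gets one, and releasing it afterwards, can be sketched with a device table. The device and process names are made up for illustration; a real OS would block the requesting process rather than return a flag.

```python
# device -> current owner (None means the device is free)
devices = {"printer": None, "tape0": None}

def request_device(process, device):
    """Allocate the device to the process if it exists and is free."""
    if device in devices and devices[device] is None:
        devices[device] = process
        return True
    return False  # busy: in a real OS the process would be blocked here

def release_device(device):
    """Deallocate the device so another process can use it."""
    devices[device] = None

got = request_device("P1", "printer")        # P1 acquires the printer
contended = request_device("P2", "printer")  # P2 must wait: P1 holds it
release_device("printer")                    # P1 is done with the device
retry = request_device("P2", "printer")      # now P2 succeeds
```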
9. File Management and Manipulation
A file system is organized into directories for efficient and easy navigation and usage. These directories may contain other directories and files. An operating system carries out the following file management activities: it keeps track of where information is stored, user access settings, the status of every file, and more. These facilities are collectively known as the file system.

File system manipulation: A file represents a collection of related information. Computers can store files on disk (secondary storage) for long-term storage. Examples of storage media include magnetic tape, magnetic disk and optical disks such as CD and DVD. Each of these media has its own properties, such as speed, capacity, data transfer rate and data access method.


A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The major activities of an operating system with respect to file management are:

• A program needs to read a file or write a file.

• The operating system grants the program permission to operate on the file.

• Permissions vary: read-only, read-write, denied, and so on.

• The operating system provides an interface to the user to create/delete files.

• The operating system provides an interface to the user to create/delete directories.
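The create/delete interface listed above is exposed by the OS through system calls, which in Python surface via the standard os module. A minimal sketch, using a temporary directory so it leaves no trace behind:

```python
import os
import tempfile

base = tempfile.mkdtemp()            # ask the OS to create a directory
path = os.path.join(base, "notes.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # create a file and open it
os.write(fd, b"hello")                        # write to the new file
os.close(fd)

fd = os.open(path, os.O_RDONLY)      # reopen the same file read-only
data = os.read(fd, 100)
os.close(fd)

os.remove(path)                      # OS interface to delete files...
os.rmdir(base)                       # ...and to delete directories
gone = not os.path.exists(path)
```

Each call here (open, read, write, remove) is a thin wrapper over the corresponding operating-system service.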

10. Program execution

Operating systems handle many kinds of activities, from user programs to system programs such as the printer spooler, name servers and file servers. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers, OS resources in use). The major activities of an operating system with respect to program management are:

• Loads a program into memory.

• Executes the program.

• Handles the program's execution.

• Provides a mechanism for process synchronization.

• Provides a mechanism for process communication.

• Provides a mechanism for deadlock handling.
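Loading and executing a program, as listed above, can be demonstrated with the standard subprocess module, which asks the OS to create a process, load a program into its memory, run it, and report its exit status. Here the "program" is a one-line Python child, purely for illustration.

```python
import subprocess
import sys

# Ask the OS to load and execute a child program, then wait for it to finish.
result = subprocess.run(
    [sys.executable, "-c", "print('child ran')"],
    capture_output=True,
    text=True,
)
exit_status = result.returncode   # 0 means the program completed normally
output = result.stdout.strip()    # whatever the child wrote to its stdout
```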

11. I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An operating system manages the communication between users and device drivers.


• An I/O operation means a read or write operation on a file or a specific I/O device.

• The operating system provides access to the required I/O device when needed.

• The operating system provides an interface to create a backup of the file system.
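The idea that drivers hide device peculiarities behind a uniform read/write interface can be sketched with two toy "drivers" that expose the same operations. The devices here are simulated in memory; real drivers talk to hardware registers, but the device-independent code above them looks just like `copy_out` below.

```python
class NullDevice:
    """Toy driver: discards writes and reads nothing (like /dev/null)."""
    def write(self, data):
        return len(data)   # pretend everything was written
    def read(self, n):
        return b""

class RamDisk:
    """Toy driver: stores bytes in memory, yet offers the same interface."""
    def __init__(self):
        self.buf = bytearray()
    def write(self, data):
        self.buf.extend(data)
        return len(data)
    def read(self, n):
        data = bytes(self.buf[:n])
        del self.buf[:n]
        return data

def copy_out(device, data):
    """Device-independent code: works with any driver exposing write()."""
    return device.write(data)

n1 = copy_out(NullDevice(), b"hello")  # same call...
ram = RamDisk()
n2 = copy_out(ram, b"hello")           # ...different device underneath
readback = ram.read(5)
```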

12. Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication:
 Two processes often require data to be transferred between them.
 Both the processes can be on one computer or on different computers, but are
connected through a computer network.
 Communication may be implemented by two methods, either by Shared Memory
or by Message Passing.
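The message-passing method can be sketched with two processes connected by pipes. In this Python example, the parent sends a message down the child's standard input and reads the reply from its standard output; the child program is invented for illustration:

```python
import subprocess
import sys

# A tiny child process: read a message, send back the reply in upper case.
child_program = "import sys; print(sys.stdin.read().upper())"

proc = subprocess.Popen(
    [sys.executable, "-c", child_program],
    stdin=subprocess.PIPE,    # communication line: parent -> child
    stdout=subprocess.PIPE,   # communication line: child -> parent
    text=True,
)

# Send "hello" to the child and wait for its reply.
reply, _ = proc.communicate("hello")
reply = reply.strip()
```

Shared memory, the other method mentioned above, would instead map one memory region into both processes (for example via Python's `multiprocessing.shared_memory`).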
13. Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and files storage are to be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.

 CPU scheduling algorithms are used for better utilization of CPU.

Classification of Operating Systems


Operating systems can be classified according to the following:
(a) Number of users
1. A single-user operating system

A single-user operating system is designed to support one user at a time on a single
computer or similar device. It is the most common type of system used on home
computers, and is also used in offices and other work environments.

2. Multi-user operating system


It is a computer operating system (OS) that allows multiple users on different computers
or terminals to access a single system with one OS on it. These programs are often quite
complicated and must be able to properly manage the necessary tasks required by the
different users connected to it.
A multi-user operating system (OS) is a computer system that allows multiple users
that are on different computers to access a single system's OS resources simultaneously,
as shown in Figure 1. Users on the system are connected through a network. The OS
shares resources between users, depending on what type of resources the users need. The
OS must ensure that the system stays well-balanced in resources to meet each user's
needs and not affect other users who are connected. Some examples of a multi-user OS
are Unix, Virtual Memory System (VMS) and mainframe OS.

Figure 1 - Multi-user OS Handling Three Different Computers on the Network

(b) Number of Tasks Execute at a Time


1. Single Tasking

A single-tasking system can only run one program at a time, while a multi-tasking operating
system allows more than one program to be running in concurrency.

2. Multitasking

Multitasking, in an operating system, allows a user to perform more than one computer task
(such as running an application program) at a time. The operating system keeps track of where
you are in each of these tasks and lets you go from one to the other without losing information.
Multitasking is a logical extension of a multiprogramming system, which supports multiple
programs running concurrently. In multitasking, more than one task is executed at the same time.
In this technique the multiple tasks, also known as processes, share common processing
resources such as a CPU. In the case of a computer with a single CPU, only one job can be
processed at a time. Multitasking solves this problem by scheduling which task should be the
running task and when a waiting task should get a turn.
(c) Human Computer Interface

The term interface is used to describe the boundary across which two different "systems"
communicate. Interactions between a computer user and their computer are said to take place at
the human computer interface (or HCI). The interface allows the user to communicate
effectively with the computer and the computer to communicate with the user.

1. Command Driven Interface


In a command driven interface the user is required to type textual commands into the
computer. The computer carries out the command as soon as the enter key is pressed.
A command line interface (CLI) is a text-based user interface (UI) used to view and
manage computer files. Command line interfaces are also called command-line user
interfaces, console user interfaces and character user interfaces.

The commands may be:

 A whole word.
 An abbreviation.
 A single character.

Commands can also be assigned to a function key or a combination of pressing the function (fn)
key and another key.
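A command dispatcher that accepts a whole word, an abbreviation, or a single character for the same command can be sketched as follows (the command names are invented for illustration):

```python
# Map each accepted spelling of a command to its canonical form.
COMMANDS = {
    "delete": "delete", "del": "delete", "d": "delete",
    "copy":   "copy",   "cp":  "copy",   "c": "copy",
}

def interpret(line):
    """Return the canonical command for the typed text."""
    word = line.strip().lower()
    return COMMANDS.get(word, "unknown command")
```

Here `interpret("DEL")` and `interpret("d")` both resolve to the same `delete` command.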

2. Menu-Driven Interface
Menu driven Interface employs a series of screens, or ''menus,'' that allow users to make
choices about what to do next. A menu-driven interface can use a list format or graphics,
with one selection leading to the next menu screen, until the user has completed the
desired outcome.
The term is also used to describe a software program that is operated using menus instead of
typed commands.

3. Graphical User Interface

A graphical user interface (or GUI) uses images to make the interface easy to use. The
Windows operating system is the best-known example of a graphical user interface.

The computer operator makes use of windows, icons and menus to interact with the computer
system.

A graphical user interface is sometimes known as a WIMP environment. Wimp is short for the
different components of the interface:

Windows

Icons

Mouse

Pointer (or Pull-down menu)

The first GUI was developed in 1973 by Xerox at their Palo Alto Research Center in California.
Apple used the ideas developed by Xerox for their Macintosh computer in 1984, and Microsoft
followed with the first version of Windows in 1985.

A graphical user interface allows the user to work with several programs at the same time. This
is known as multitasking.

A graphical user interface needs more computing power than a command driven system. As most
of the commands needed to carry out a task are on display or are easily accessible through a
menu in a GUI they are easier to learn and use than a command driven system.

In summary: Types of Operating Systems

Some of the widely used operating systems are as follows:

1. Batch Operating System –


This type of operating system does not interact with the computer directly. There is an operator
which takes similar jobs having the same requirements and groups them into batches. It is the
responsibility of the operator to sort the jobs with similar needs.

Advantages of Batch Operating System:


 Although it is very difficult to guess the time required for any job to complete, the
processors of batch systems know how long a job will take while it is in the queue
 Multiple users can share the batch systems
 The idle time for a batch system is very low
 It is easy to manage large work repeatedly in batch systems
Disadvantages of Batch Operating System:

 The computer operators should be familiar with batch systems
 Batch systems are hard to debug
 It is sometimes costly
 The other jobs will have to wait for an unknown time if any job fails

Examples of Batch based Operating System: Payroll System, Bank Statements etc.

2. Time-Sharing Operating Systems –


Each task is given some time to execute, so that all the tasks work smoothly. Each user gets
CPU time as they share a single system. These systems are also known as multitasking systems.
The tasks can be from a single user or from different users. The time that each task gets to
execute is called a quantum. After this time interval is over, the OS switches over to the next task.

Advantages of Time-Sharing OS:


 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of security and integrity of user programs and data
 Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix etc.
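The quantum mechanism described above can be illustrated with a toy round-robin scheduler: each task runs for at most one quantum per turn, and unfinished tasks go back to the end of the queue. This is a simplified sketch, not the scheduler of any particular operating system:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which tasks run, as (task, time units) slices."""
    queue = deque(burst_times.items())
    schedule = []
    while queue:
        task, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for at most one quantum
        schedule.append((task, run))
        if remaining > run:
            queue.append((task, remaining - run))  # unfinished: requeue
    return schedule
```

For example, with a quantum of 2, tasks A (3 units) and B (5 units) are interleaved in two-unit slices until each finishes.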

3. Distributed Operating System –


These types of operating systems are a recent advancement in the world of computer technology
and are being widely accepted all over the world at a great pace. Various autonomous
interconnected computers communicate with each other using a shared communication network.
Independent systems possess their own memory unit and CPU. These are referred to as loosely
coupled systems or distributed systems. These systems' processors differ in size and function.
The major benefit of working with these types of operating systems is that a user can always
access files or software which are not actually present on his system but on some other system
connected within this network, i.e., remote access is enabled within the devices connected in
that network.

Advantages of Distributed Operating System:


 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces

Disadvantages of Distributed Operating System:


 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined
 These types of systems are not readily available, as they are very expensive. Not only that,
the underlying software is highly complex and not yet well understood
Examples of Distributed Operating System are: LOCUS etc.
4. Network Operating System –
These systems run on a server and provide the capability to manage data, users, groups, security,
applications, and other networking functions. These types of operating systems allow shared
access of files, printers, security, applications, and other networking functions over a small
private network. One more important aspect of Network Operating Systems is that all the users
are well aware of the underlying configuration of all other users within the network and their
individual connections, which is why these computers are popularly known as tightly coupled
systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated to the system
 Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:

 Servers are costly


 User has to depend on central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD etc.

5. Real-Time Operating System –


These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small. This time interval is called the response time.

Real-time systems are used when time requirements are very strict, as in missile
systems, air traffic control systems, robots, etc.
Two types of Real-Time Operating System which are as follows:
 Hard Real-Time Systems:
These OSs are meant for the applications where time constraints are very strict and even
the shortest possible delay is not acceptable. These systems are built for saving life like
automatic parachutes or air bags which are required to be readily available in case of any
accident. Virtual memory is almost never found in these systems.
 Soft Real-Time Systems:
These OSs are meant for applications where the time constraint is less strict.

Advantages of RTOS:

 Maximum Consumption: Maximum utilization of devices and the system, thus more output
from all the resources
 Task Shifting: The time assigned for shifting tasks in these systems is very small. For
example, older systems take about 10 microseconds to shift from one task to another, while
the latest systems take 3 microseconds.
 Focus on Application: Focus is on running applications, with less importance given to
applications which are in the queue.
 Real-time operating system in embedded systems: Since the size of programs is small,
RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are error free.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and concentration is kept on very few
applications to avoid errors.
 Use of heavy system resources: Sometimes the system resources are not so good, and they
are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to
write.
 Device drivers and interrupt signals: It needs specific device drivers and interrupt signals
to respond to interrupts as early as possible.
 Thread Priority: It is not good to set thread priority, as these systems are very less prone to
switching tasks.
Examples of Real-Time Operating Systems are: Scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

Services Provided by An Operating System


An operating system performs these services for applications:

 In a multitasking operating system where multiple programs can be running at the same
time, the operating system determines which applications should run in what order and
how much time should be allowed for each application before giving another application
a turn.
 It manages the sharing of internal memory among multiple applications
 It handles input and output to and from attached hardware devices, such as hard disks,
printers, and dial-up ports.
 It sends messages to each application or interactive user (or to a system operator) about
the status of operation and any errors that may have occurred.
 It can offload the management of what are called batch jobs (for example, printing) so
that the initiating application is freed from this work.
 On computers that can provide parallel processing, an operating system can manage how
to divide the program so that it runs on more than one processor at a time.

Job Control
Job control language (JCL) is a scripting language executed on an IBM mainframe
operating system. It consists of control statements that designate a specific job for the
operating system.

JCL provides a means of communication between the application program, operating
system and system hardware. There are two distinct IBM Job Control Languages:
 one for the operating system lineage that begins with DOS/360 and whose latest
member is z/VSE; and
 the other for the lineage from OS/360 to z/OS, the latter now
including JES extensions, Job Entry Control Language (JECL).

Application of JCL:

Here are examples of some of these applications.

1. Program control: To control the execution of the command file, we shall require the
usual control structures we find in high-level programming languages – conditional
execution, iteration, composition of instructions, etc.
2. Communication with programs: We require access to the current state of a process, and
to its recent output, and we must be able to set parameters for processes. It is particularly
important to be able to determine why a process stopped.
3. Communication with the system: We must be able to determine normally accessible
facts about the state of the system – are certain files present, what time it is, etc.

Features common to DOS and OS JCL

Jobs, steps and procedures

For both DOS and OS JCL, the unit of work is the job.

A job consists of one or several steps, each of which is a request to run one specific program. For
example, before the days of relational databases, a job to produce a printed report for
management might consist of the following steps:

 a user-written program to select the appropriate records and copy them to a temporary
file;
 sort the temporary file into the required order, usually using a general-purpose utility;
 a user-written program to present the information in a way that is easy for the end-users
to read and includes other useful information such as sub-totals;
 and a user-written program to format selected pages of the end-user information for
display on a monitor or terminal.

In both DOS and OS JCL, the first "card" must be the JOB card, which:

 Identifies the job.


 Usually provides information to enable the computer services department to bill the
appropriate user department.

 Defines how the job as a whole is to be run, e.g. its priority relative to other jobs in the
queue.

Procedures (commonly called procs) are pre-written JCL for steps or groups of steps, inserted
into a job. Both JCLs allow such procedures. Procs are used for repeating steps which are used
several times in one job, or in several different jobs. They save programmer time and reduce the
risk of errors. To run a procedure, one simply includes in the JCL file a single "card" which
copies the procedure from a specified file, and inserts it into the job stream. Also, procs can
include parameters to customize the procedure for each use.

Operating Systems Structures

The design of an operating system architecture traditionally follows the separation of concerns
principle. This principle suggests structuring the operating system into relatively independent
parts that provide simple individual features, thus keeping the complexity of the design
manageable.

Besides managing complexity, the structure of the operating system can influence key features
such as robustness or efficiency:

 The operating system possesses various privileges that allow it to access otherwise
protected resources such as physical devices or application memory. When these
privileges are granted to the individual parts of the operating system that require them,
rather than to the operating system as a whole, the potential for both accidental and
malicious privilege misuse is reduced.

 Breaking the operating system into parts can have adverse effect on efficiency because of
the overhead associated with communication between the individual parts. This overhead
can be exacerbated when coupled with hardware mechanisms used to grant privileges.

The following sections outline typical approaches to structuring the operating system.

Definition of monolithic operating system

The monolithic operating system is a very basic operating system in which file management,
memory management, device management, and process management are directly controlled
within the kernel. All of these components, like file management and memory management,
are located within the kernel.

Monolithic architecture diagram

History of monolithic operating system

The monolithic operating system is also known as the monolithic kernel. This is an old type of
operating system. Such systems were used to perform small tasks like batch processing and
time-sharing tasks in banks. The monolithic kernel acts as a virtual machine which controls all
hardware parts. It is different from a microkernel, which has limited tasks. A microkernel system
is divided into two parts, i.e. kernel space and user space, which communicate with each other
through IPC (inter-process communication). An advantage of the microkernel is that if one
server fails, another server takes control of it. Operating systems which use monolithic
architecture were first used in the 1970s.

Features of the monolithic operating system

Simple structure:

This type of operating system has a simple structure. All the components needed for processing
are embedded into the kernel.

Works for smaller tasks:

It works better for performing smaller tasks as it can handle limited resources.

Communication between components:

All the components can directly communicate with each other and also with the kernel.

Fast operating system:

The code that makes up a monolithic kernel runs very fast and is robust.

Limitations of a monolithic operating system

 Code written in this operating system (OS) is difficult to port.

 A monolithic OS has a greater tendency to generate errors and bugs, because user
processes use the same address locations as the kernel.
 Adding and removing features from a monolithic OS is very difficult; all the code needs to
be rewritten and recompiled to add or remove any feature.

Examples of monolithic operating system

 VMS
 Linux
 OS/360
 OpenVMS
 Multics
 AIX
 BSD

Layered Architecture of Operating System


A layered operating system is an operating system that groups related functionality together,
and separates it from the unrelated. Its architectural structure resembles a layer cake. It starts at
level 0, or the hardware level and works its way up to the operator, or user.
This is an important operating system architecture, meant to overcome the disadvantages of
early monolithic systems. In this approach, the OS is split into various layers such that each
layer performs a different functionality.

Each layer can interact with the one just above it and the one just below it. Lowermost layer
which directly deals with the bare hardware is mainly meant to perform the functionality of I/O
communication and the uppermost layer which is directly connected with the application
program acts as an interface between user and operating system.

This is a highly advantageous structure because all the functionalities are on different layers,
and hence each layer can be tested and debugged separately.

The Microsoft Windows Operating System is a good example of the layered structure.

Fig. Layered Architecture of Operating System

Advantages of Layered architecture:


1. Dysfunction of one layer will not affect the entire operating system
2. Easier testing and debugging due to isolation among the layers.
3. Adding new functionalities or removing the obsolete ones is very easy.

Micro-Kernel

A microkernel is the minimum software that is required to correctly implement an operating


system. This includes memory, process scheduling mechanisms and basic inter-process
communication.

A diagram that demonstrates the architecture of a microkernel is as follows:

In the above diagram, the microkernel contains basic requirements such as memory, process
scheduling mechanisms and basic inter-process communication. The only software executing at
the privileged level i.e. kernel mode is the microkernel. The other functions of the operating
system are removed from the kernel mode and run in the user mode. These functions may be
device drivers, file servers, application inter-process communication etc.

The microkernel makes sure that the code can be easily managed because the services are
divided in the user space. This means that there is less code running in the kernel mode which
results in increased security and stability.

Essential Components in a Microkernel

A microkernel contains only the core functionalities of the system. A component is included in
the microkernel only if putting it outside would disrupt the functionality of the system. All the
other non-essential components are put in the user mode.

The minimum functionalities included in the microkernel are:

 Memory management mechanisms like address spaces are included in the microkernel.
This also contains memory protection features.
 Processor scheduling mechanisms are also necessary in the microkernel. This contains
process and thread schedulers.
 Inter-Process Communication (IPC) is important as it is needed to manage the servers
that run their own address spaces.

Performance of a Microkernel System

Providing services in a microkernel system is much more expensive than in a normal
monolithic system. The service is obtained by sending an inter-process communication message
to the server and getting one in return. This means a context switch, or a function call if the
drivers are implemented as processes or procedures respectively.

Performance therefore can be complicated in microkernel systems and may lead to some
problems. However, this issue has been reduced in modern microkernel systems such as the L4
microkernel family.

Benefits of Microkernels

Some of the benefits of microkernels are:

 Microkernels are modular and the different modules can be replaced, reloaded, modified,
changed etc. as required. This can be done without even touching the kernel.
 Microkernels are quite secure, as only those components that would otherwise disrupt
the functionality of the system are included in the kernel.

 Microkernel systems experience fewer crashes than monolithic systems. Also, the
crashes that do occur can be handled quite easily due to the modular structure of
microkernels.

CHAPTER TWO

PROCESS MANAGEMENT.

PROCESS CONCEPTS.

Definition of terms.

i. PROCESS
A process is a program in execution. The execution of a process must progress in a sequential
fashion.

It can also be defined as:


A process is defined as an entity which represents the basic unit of work to be implemented in
the system.

ii. PROGRAM
A program by itself is not a process. It is a static entity made up of program statements, while a
process is a dynamic entity. A program contains the instructions to be executed by the processor.

A program occupies space at a single place in main memory and continues to stay there. A
program does not perform any action by itself.

iii. THREAD

A thread is also called a lightweight process. It is a flow of execution through the process code,
with its own program counter, system registers and stack. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach to
improving operating system performance by reducing overhead; a thread is equivalent to a
classical process.

Each thread belongs to exactly one process, and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared-memory multiprocessors.

Difference between Process and Thread

1. A process is heavy weight or resource intensive. A thread is light weight, taking fewer
resources than a process.
2. Process switching needs interaction with the operating system. Thread switching does not
need to interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own
memory and file resources. All threads can share the same set of open files and child
processes.
4. If one process is blocked, then no other process can execute until the first process is
unblocked. While one thread is blocked and waiting, a second thread in the same task can
run.
5. Multiple processes without using threads use more resources. Multiple threaded processes
use fewer resources.
6. Each process operates independently of the others. One thread can read, write or change
another thread's data.

Advantages of Threads

 Threads minimize context-switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 Economy- It is more economical to create and context switch threads.
 Utilization of multiprocessor architectures to a greater scale and efficiency.
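The concurrency and efficient communication noted above can be seen in a short Python sketch: two threads of the same process increment one shared counter, with a lock serializing access so that the updates do not interleave badly:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment the shared counter; both threads see the same variable."""
    global counter
    for _ in range(iterations):
        with lock:              # protect the shared data
            counter += 1

t1 = threading.Thread(target=worker, args=(10_000,))
t2 = threading.Thread(target=worker, args=(10_000,))
t1.start()
t2.start()
t1.join()
t2.join()
```

Because both threads live in one process, they share the `counter` variable directly; two separate processes would instead need an explicit IPC mechanism.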

Types of Thread
Threads are implemented in the following two ways:
 User Level Threads -- User managed threads
 Kernel Level Threads -- Operating System managed threads acting on kernel, an
operating system core.

User Level Threads


In this case, the application manages thread management; the kernel is not aware of the
existence of threads. The thread library contains code for creating and destroying threads, for
passing messages and data between threads, for scheduling thread execution and for saving and
restoring thread contexts. The application begins with a single thread and begins running in that
thread.

Kernel Level Threads

 In this case, thread management is done by the kernel. There is no thread-management
code in the application area. Kernel threads are supported directly by the operating
system. Any application can be programmed to be multithreaded. All of the threads
within an application are supported within a single process.

The Kernel maintains context information for the process as a whole and for individuals threads
within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs
thread creation, scheduling and management in Kernel space. Kernel threads are generally
slower to create and manage than the user threads.

Advantages

 The kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
 Kernel routines themselves can be multithreaded.
Disadvantages

 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.

Difference between User-Level & Kernel-Level Thread

1. User-level threads are faster to create and manage. Kernel-level threads are slower to
create and manage.
2. Implementation is by a thread library at the user level. The operating system supports
creation of kernel threads.
3. User-level threads are generic and can run on any operating system. Kernel-level threads
are specific to the operating system.
4. Multi-threaded applications cannot take advantage of multiprocessing. Kernel routines
themselves can be multithreaded.

iv. PROCESS CONTROL BLOCK (PCB)

Each process is represented in the operating system by a process control block (PCB), also
called a task control block. The PCB is a data structure used by the operating system to group
all the information it needs about a particular process.
The PCB contains many pieces of information associated with a specific process, which are
described below.

Process Control Block is a data structure that contains information of the process related to it.
The process control block is also known as a task control block, entry of the process table, etc.

It is very important for process management as the data structuring for processes is done in terms
of the PCB. It also defines the current state of the operating system.

The process control block includes CPU scheduling, I/O resource management, file
management information, etc. The PCB serves as the repository for any information which can
vary from process to process. The loader/linker sets flags and registers when a process is
created. If that process gets suspended, the contents of the registers are saved on a stack and the
pointer to the particular stack frame is stored in the PCB. By this technique, the hardware state
can be restored so that the process can be scheduled to run again.


Structure of the Process Control Block

The process control block stores many data items that are needed for efficient process
management. Some of these data items are explained below.

The following are the data items and their functions.

Process State

This specifies the process state i.e. new, ready, running, waiting or terminated.

Process Number

This shows the number of the particular process.

Program Counter

This contains the address of the next instruction that needs to be executed in the process.

Registers

This specifies the registers that are used by the process. They may include accumulators, index
registers, stack pointers, general purpose registers etc.

List of Open Files


These are the different files that are associated with the process

CPU Scheduling Information

The CPU scheduling information contained in the PCB includes the process priority, pointers
to scheduling queues, etc. It may also include any other scheduling parameters.
Memory Management Information

The memory management information includes the page tables or the segment tables depending
on the memory system used. It also contains the value of the base registers, limit registers etc.

I/O Status Information

This information includes the list of I/O devices used by the process, the list of files etc.

Accounting information

The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the
PCB accounting information.

Location of the Process Control Block

The process control block is kept in a memory area that is protected from the normal user access.
This is done because it contains important process information. Some of the operating systems
place the PCB at the beginning of the kernel stack for the process as it is a safe location.
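The fields above can be grouped into a single record type. A minimal sketch in Python follows; the field names are illustrative choices for this section, not taken from any real kernel.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """One record per process; field names are illustrative."""
    pid: int                                   # process number
    state: str = "new"                         # new/ready/running/waiting/terminated
    program_counter: int = 0                   # address of next instruction
    registers: dict = field(default_factory=dict)
    priority: int = 0                          # CPU scheduling information
    open_files: list = field(default_factory=list)
    base: int = 0                              # memory management information
    limit: int = 0
    cpu_time_used: float = 0.0                 # accounting information

pcb = PCB(pid=7)
pcb.state = "ready"                            # state changes are just field updates
```

A real PCB also carries pointers into kernel data structures; this sketch only mirrors the field list described above.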

IV. PROCESS STATES

In the operating system, a process is something that is currently under execution, so an
active program can be called a process. For example, when you want to search for something on the
web, you start a browser; that browser is a process. Another example of a process is
starting your music player to listen to some music of your choice.


A Process has various attributes associated with it. Some of the attributes of a Process are:

 Process Id: Every process will be given an id called Process Id to uniquely identify
that process from the other processes.

 Process state: Each and every process has some states associated with it at a particular
instant of time. This is denoted by process state. It can be ready, waiting, running, etc.

 CPU scheduling information: Each process is executed using a process

scheduling algorithm such as FCFS, Round-Robin or SJF.

 I/O information: Each process needs some I/O devices for their execution. So, the
information about device allocated and device need is crucial.

States of a Process
During its execution, a process passes through a number of states.

New

This is the state when the process has just been created. It is the initial state in the process life
cycle.

Ready

In the ready state, the process is waiting to be assigned the processor by the short term scheduler,
so it can run. This state is immediately after the new state for the process.

Ready Suspended

The processes in the ready suspended state are in secondary memory. They were initially in the
ready state in main memory, but a lack of memory forced them to be suspended, and they were
placed in secondary memory.

Running


The process is said to be in running state when the process instructions are being executed by the
processor. This is done once the process is assigned to the processor using the short-term
scheduler.

Blocked

The process is in the blocked state if it is waiting for some event to occur. This event is
typically I/O; while the I/O is in progress the process does not require the processor. After the
event is complete, the process goes back to the ready state.

Blocked Suspended

This is similar to ready suspended. The processes in the blocked suspended state are in secondary
memory. They were initially in the blocked state in main memory, waiting for some event, but a
lack of memory forced them to be suspended, and they were placed in secondary memory. A
process moves from blocked suspended to ready suspended when the event it was waiting for occurs.

Terminated

The process is terminated once it finishes its execution. In the terminated state, the process is
removed from main memory and its process control block is also deleted.
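The transitions between the states described above can be captured as a small table. Below is a sketch in Python; the allowed-transition sets are my reading of this section, and real kernels differ in detail.

```python
# Allowed transitions in the state model described above (illustrative).
TRANSITIONS = {
    "new":               {"ready"},
    "ready":             {"running", "ready suspended"},
    "ready suspended":   {"ready"},
    "running":           {"ready", "blocked", "terminated"},
    "blocked":           {"ready", "blocked suspended"},
    "blocked suspended": {"ready suspended"},
    "terminated":        set(),
}

def move(state, new_state):
    """Return new_state if the transition is legal, else raise ValueError."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = move("new", "ready")
s = move(s, "running")
s = move(s, "blocked")     # waiting for an I/O event
s = move(s, "ready")       # event complete
```

Note, for instance, that a process cannot go directly from new to running: it must pass through ready and be picked by the short-term scheduler.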


Difference Between Process And Program

Process                                            Program
A process is a program in execution.               A program is a set of instructions.
A process is an active/dynamic entity.             A program is a passive/static entity.
A process has a limited life span: it is           A program has a longer life span: it is
created when execution starts and terminated       stored on disk until deleted.
when execution finishes.
A process holds resources such as memory           A program is stored on disk in a file and
addresses, disks and printers as required.         does not hold any other resource.
A process has an address space in main             A program requires memory space on disk to
memory.                                            store all its instructions.

V. CONCURRENCY CONTROL

Concurrency control is the task of managing simultaneous operations in a system without


having them interfere with one another.

i) Inter-process communication

Inter Process Communication (IPC) refers to a mechanism, where the operating systems allow
various processes to communicate with each other. This involves synchronizing their actions and
managing shared data.

IPC is a set of programming interfaces which allow a programmer to coordinate activities among


various program processes that can run concurrently in an operating system. This allows a
specific program to handle many user requests at the same time.


ii) Synchronization
Process synchronization means sharing system resources among processes in such a way that
concurrent access to shared data is handled in an orderly manner, minimizing the chance of
inconsistent data. Maintaining data consistency demands mechanisms that ensure synchronized
execution of cooperating processes.

a) Semaphores
In computer science, a semaphore is a variable or abstract data type used to control
access to a common resource by multiple processes in a concurrent system, such as a
multitasking operating system. At its simplest, a semaphore is a variable used for signaling
from one task to another.

There are three types of semaphores: binary, counting and mutex semaphores.

i) Binary Semaphore: A binary semaphore is used when there is only one shared resource.
A binary semaphore exists in two states, i.e. Acquired (Take) and Released (Give).
Binary semaphores have no ownership and can be released by any task or ISR, regardless
of which task performed the last take operation.
ii) Counting Semaphore: To handle more than one shared resource of the same type, a
counting semaphore is used. A counting semaphore is initialized with the count (N) and
allocates resources, decrementing the count, until the count reaches zero, after which a
requesting task enters the blocked state.

iii) Mutex Semaphore: A mutex is very similar to a binary semaphore but additionally takes
care of priority inversion, ownership, and recursion.
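The counting-semaphore behaviour described above can be demonstrated with Python's `threading.Semaphore`. In this sketch, six workers compete for N = 2 identical resources; the semaphore guarantees that at most N workers hold a resource at any moment. All names here are illustrative.

```python
import threading
import time

N = 2                                  # two identical shared resources
sem = threading.Semaphore(N)           # counting semaphore initialized to N
lock = threading.Lock()
active = 0                             # workers currently holding a resource
peak = 0                               # highest value 'active' ever reached

def worker():
    global active, peak
    with sem:                          # take: blocks once N workers are inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)               # pretend to use the resource
        with lock:
            active -= 1
                                       # give: happens when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds N, no matter how the 6 workers interleave
```

The `with sem:` block performs the take on entry and the give on exit, so a forgotten release is impossible in this style.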

b) Monitors: To overcome the timing errors that occur when semaphores are used for process
synchronization, researchers introduced a high-level synchronization construct,
the monitor type. A monitor type is an abstract data type that is used for process
synchronization.


Being an abstract data type, the monitor type contains the shared data variables that are to
be shared by all the processes, together with some programmer-defined operations that allow
processes to execute in mutual exclusion within the monitor. A process cannot directly
access the shared data variables in the monitor; it must access them through the
procedures defined in the monitor, which allow only one process at a time to access the
shared variables.

Advantages of Monitor:

Monitors have the advantage of making parallel programming easier and less error prone
than techniques such as semaphores.

Differences Between Semaphore and Monitor

1. The basic difference between a semaphore and a monitor is that a semaphore is an integer
variable S which indicates the number of resources available in the system, whereas
a monitor is an abstract data type which allows only one process at a time to execute in its
critical section.

2. The value of a semaphore can be modified by the wait() and signal() operations only. A
monitor, on the other hand, has shared variables and the procedures through which those
shared variables can be accessed by the processes.

3. With semaphores, when a process wants to access shared resources it performs a wait()
operation, and when it releases the resources it performs a signal() operation. With
monitors, when a process needs to access shared resources, it has to access them through
the procedures of the monitor.

4. A monitor type has condition variables, which a semaphore does not have.
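The monitor discipline (shared data touched only inside procedures that hold the monitor lock, plus condition variables for waiting) can be sketched with Python's `threading.Condition`. `BoundedBuffer` below is an illustrative example chosen for this sketch, not taken from the text.

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: the shared list is touched only
    inside methods that hold the monitor lock; not_full / not_empty
    play the role of the monitor's condition variables."""
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.items) >= self.capacity:
                self.not_full.wait()           # wait() releases the monitor lock
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item

buf = BoundedBuffer(capacity=2)
results = []

def consumer():
    for _ in range(3):
        results.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for x in (1, 2, 3):
    buf.put(x)                                 # blocks once the buffer is full
t.join()
```

Note that `wait()` releases the monitor lock while blocked and reacquires it before returning, which is exactly the monitor semantics described above.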

c) Message passing


Process communication is the mechanism provided by the operating system that allows processes to
communicate with each other. The message passing model allows multiple processes to read and
write data to a message queue without being directly connected to each other.
In computer science, message passing is a technique for invoking behavior (i.e., running a
program) on a computer. The invoking program sends a message to a process (which may be
an actor or object) and relies on that process and its supporting infrastructure to select and then
run the code it selects. Message passing differs from conventional programming where a process,
subroutine, or function is directly invoked by name. Message passing is key to some models of
concurrency and object-oriented programming.

Message passing is used ubiquitously in modern computer software. It is used as a way for the
objects that make up a program to work with each other and as a means for objects and systems
running on different computers (e.g., the Internet) to interact. Message passing may be
implemented by various mechanisms, including channels.
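A direct way to experiment with message passing is to give the receiver a mailbox queue. A minimal Python sketch follows; the message strings and the "STOP" sentinel are arbitrary choices for this example.

```python
import queue
import threading

# One queue per receiving process serves as its mailbox.
mailbox = queue.Queue()
received = []

def sender():
    for msg in ("ping", "data:42", "STOP"):
        mailbox.put(msg)          # send: the queue does the buffering

def receiver():
    while True:
        msg = mailbox.get()       # receive: blocks until a message arrives
        if msg == "STOP":
            break
        received.append(msg)

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# received is now ["ping", "data:42"]
```

The two threads never share a variable directly: all coordination happens through the mailbox, which is the defining property of the message passing model.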

PROCESS SCHEDULING

Process scheduling refers to the set of policies and mechanisms that control the order in which
work is performed by a computer system. Of all the resources in a computer system that are
scheduled before use, the CPU is by far the most important.

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy. Process scheduling is an essential part of multiprogramming operating systems.

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types -

 Long-Term Scheduler

 Short-Term Scheduler


 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the job queue and loads them
into memory for execution, where they become candidates for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems, for example, have no long-term scheduler. The long-term scheduler comes into play when
a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with a chosen set of criteria. It carries out the transition of a process from the
ready state to the running state: the CPU scheduler selects a process from among the processes
that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in-charge of handling
the swapped out-processes.


A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This procedure is called swapping, and the process is said to be swapped out or rolled
out. Swapping may be necessary to improve the process mix.


Comparison among Scheduler

S.N.  Long-Term Scheduler               Short-Term Scheduler             Medium-Term Scheduler

1     It is a job scheduler.            It is a CPU scheduler.           It is a process-swapping
                                                                         scheduler.

2     Speed is lesser than the          Speed is fastest among           Speed is in between the
      short-term scheduler.             the three.                       short- and long-term
                                                                         schedulers.

3     It controls the degree of         It provides lesser control       It reduces the degree of
      multiprogramming.                 over the degree of               multiprogramming.
                                        multiprogramming.

4     It is almost absent or minimal    It is also minimal in a          It is a part of
      in time-sharing systems.          time-sharing system.             time-sharing systems.

5     It selects processes from the     It selects those processes       It can re-introduce a
      pool and loads them into          which are ready to execute.      process into memory so
      memory for execution.                                              that execution can be
                                                                         continued.

Context Switch

A context switch is the mechanism that stores and restores the state or context of a CPU in the
process control block so that a process execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the state
from the current running process is stored into the process control block. After this, the state for
the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that
point, the second process can start executing.


PROCESS SCHEDULING ALGORITHMS

A Process Scheduler schedules different processes to be assigned to the CPU based on


particular scheduling algorithms. These algorithms are either non-preemptive or preemptive.
Non-preemptive algorithms are designed so that once a process enters the running state, it
cannot be preempted until it completes its allotted time, whereas the preemptive scheduling is
based on priority where a scheduler may preempt a low priority running process anytime when
a high priority process enters into a ready state.

There are six popular process scheduling algorithms which we are going to discuss in this
chapter −

 First-Come, First-Served (FCFS) Scheduling

 Shortest-Job-Next (SJN) Scheduling

 Priority Scheduling

 Shortest Remaining Time

 Round Robin(RR) Scheduling

 Multiple-Level Queues Scheduling

First Come First Serve (FCFS)

 Jobs are executed on a first come, first served basis.

 It is a non-preemptive scheduling algorithm.

 Easy to understand and implement.

 Its implementation is based on a FIFO queue.

 Poor in performance, as the average wait time is high.


Using the process table given under Shortest Job Next below (arrival times 0, 1, 2, 3 and
execution times 5, 3, 8, 6), the wait time of each process under FCFS is as follows -

Process   Wait Time : Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
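The FCFS computation above can be reproduced in a few lines of Python; `fcfs_waits` is an illustrative helper, and the arrivals and bursts are the ones from the process table used in the SJN example below.

```python
# (name, arrival time, burst time) for P0..P3
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

def fcfs_waits(procs):
    t, waits = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        t = max(t, arrival)           # CPU may sit idle until the job arrives
        waits[name] = t - arrival     # wait = service time - arrival time
        t += burst                    # job runs to completion (non-preemptive)
    return waits

w = fcfs_waits(procs)
avg = sum(w.values()) / len(w)        # (0 + 4 + 6 + 13) / 4 = 5.75
```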

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF.

 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.


 Impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time the process will take.


Given: a table of processes with their arrival and execution times.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

Waiting time of each process is as follows -

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
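The same waiting times can be reproduced by simulating non-preemptive SJN: at every decision point, pick the shortest job among those that have already arrived. `sjn_waits` is an illustrative helper.

```python
# (name, arrival time, burst time) for P0..P3
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

def sjn_waits(procs):
    t, waits, pending = 0, {}, list(procs)
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                              # CPU idle until next arrival
            t = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])       # shortest burst first
        name, arrival, burst = job
        waits[name] = t - arrival
        t += burst                                 # runs to completion
        pending.remove(job)
    return waits

w = sjn_waits(procs)                  # {'P0': 0, 'P1': 4, 'P2': 12, 'P3': 5}
```

Note that P0 runs first even though it is not the shortest job: it is the only process in the system at time 0, which is why its service time is 0 in the table above.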

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common


scheduling algorithms in batch systems.


 Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Given: a table of processes with their arrival time, execution time, and priority. Here we
consider 1 to be the lowest priority.

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

Waiting time of each process is as follows -

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2


Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
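The schedule behind these numbers (P0, then P3, P1, P2) follows from treating larger numbers as higher priority and scheduling non-preemptively. A simulation sketch in Python, with `priority_waits` as an illustrative helper:

```python
# (name, arrival, burst, priority); the text treats 1 as the lowest priority
procs = [("P0", 0, 5, 1), ("P1", 1, 3, 2), ("P2", 2, 8, 1), ("P3", 3, 6, 3)]

def priority_waits(procs):
    t, waits, pending = 0, {}, list(procs)
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t = min(p[1] for p in pending)
            continue
        # highest priority first; earlier arrival breaks ties (FCFS)
        job = max(ready, key=lambda p: (p[3], -p[1]))
        name, arrival, burst, _prio = job
        waits[name] = t - arrival
        t += burst                     # non-preemptive: runs to completion
        pending.remove(job)
    return waits

w = priority_waits(procs)             # {'P0': 0, 'P1': 10, 'P2': 12, 'P3': 2}
```

Again P0 runs first despite its low priority, because it is alone in the system at time 0.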

Shortest Remaining Time

 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where the required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.

 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for its time period, it is preempted and another process
executes for its time period.
 Context switching is used to save the states of preempted processes.

Using the same process table with a quantum of 3, the wait time of each process is as follows -

Process   Wait Time : Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
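The Round Robin timeline can be reconstructed with a ready-queue simulation; the quantum of 3 is inferred from the arithmetic above (P0 is preempted at time 3), and `rr_waits` is an illustrative helper.

```python
from collections import deque

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
QUANTUM = 3

def rr_waits(procs, quantum):
    arrivals = sorted(procs, key=lambda p: p[1])
    burst = {name: b for name, _, b in procs}
    arrive = {name: a for name, a, _ in procs}
    remaining = dict(burst)
    ready, waits, t, i = deque(), {}, 0, 0
    while len(waits) < len(procs):
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append(arrivals[i][0]); i += 1
        if not ready:                       # CPU idle until the next arrival
            t = arrivals[i][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        # processes arriving during this slice enter the queue *before*
        # the preempted process is re-queued
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append(arrivals[i][0]); i += 1
        if remaining[name] == 0:
            waits[name] = t - arrive[name] - burst[name]
        else:
            ready.append(name)
    return waits

w = rr_waits(procs, QUANTUM)          # {'P0': 9, 'P1': 2, 'P2': 12, 'P3': 11}
```

The re-queueing order matters: P3 arrives exactly when P0 is preempted at time 3, and the table shows P3 running before P0 resumes, so new arrivals must be enqueued ahead of the preempted process.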

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of other
existing algorithms to group and schedule jobs with common characteristics.

 Multiple queues are maintained for processes with common characteristics.


 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another
queue. The Process Scheduler then alternately selects jobs from each queue and assigns them to
the CPU based on the algorithm assigned to the queue.

DEADLOCKS
Deadlock - occurs when resources needed by one process are held by some other waiting
process.
Deadlock does not only occur in operating systems. The Kansas state legislature in the early
20th century passed the following legislation:
"When two trains approach each other at a crossing, both shall come to a full stop and neither
shall start up again until the other has gone."
Assume we have the following operating system: a finite number of resources is to be
distributed among some number of competing processes. Resources may be of several types, and
there may be several instances of each type. When a process requests a resource, any instance
of that resource type will satisfy the request. A process can:
 request a resource
 use the resource
 release the resource


A set of processes is in a deadlock state when every process in the set is waiting for an event
that can be caused only by another process in the set.
Same resource type - three tape drives, three processes request a tape drive then they each
request another. Dining philosophers request chopsticks held by another.
Different resource type - process A has a printer process B has a file, Each requests the
other's resource.
Four Necessary Conditions for Deadlock
 Mutual exclusion: At least one resource is not sharable, i.e. can only be used by one
process at a time
 Hold and wait: A process holds at least one resource and requests resources held by
other processes
 No preemption: resource cannot be preempted, it must be voluntarily released by the
process.
 Circular wait: Given a set of processes { P1, P2, P3, …Pn} P1 has a resource needed
by P2, P2 has a resource needed by P3, …, Pn has a resource needed by P1.

System resource-allocation graph


G = (V,E) where V is a set of vertices and E is a set of edges.
The set of vertices is partitioned into processes and resources. A resource-allocation graph is a
directed graph where an edge from process Pi to resource Rj indicates that Pi has requested Rj
(request edge), and an edge from Rj to Pi indicates that Rj has been allocated to Pi (assignment
edge).
When drawing the graph, processes are represented by circles and resources by squares.
Multiple instances of a resource are represented by dots in the square.
When a process requests a resource, a request edge is drawn. When the resource is allocated, the
request edge becomes an assignment edge. Request edges point to the resource square, but
assignment edges start from the dots within the square.

[Figure: resource-allocation graph with processes P1-P3 and resources R1-R4]

Resource allocation graph, no deadlock

If the graph contains no cycles, then there is no deadlock


If the graph contains a cycle then a deadlock condition may exist.

[Figure: resource-allocation graph with processes P1-P3 and resources R1-R4, containing a cycle]

Resource allocation graph with deadlock.

[Figure: resource-allocation graph with processes P1-P4 and resources R1, R2, containing a cycle]

Resource allocation graph with cycle and no deadlock.
P4 can release an instance of R2 and P3 will then be assigned the resource.

How can we handle deadlocks?


Try to prevent them from happening
After system is deadlocked employ some mechanism to detect the deadlock and then recover
from deadlock.
Ignore the problem, theoretically rare, and pretend deadlocks never occur (UNIX) Since
deadlocks are infrequent this may be cheaper alternative


Deadlock Prevention - Make sure one of the 4 necessary conditions for deadlock doesn't hold.

1. Mutual Exclusion - Some resources are sharable, but others cannot be shared or made
sharable; a printer, for example, can be used by only one process at a time.
2. Hold and Wait - Whenever a process requests a resource, make sure it is not holding any
other resource.
Method 1 - request all resources before execution begins.
Method 2 - request resources only when the process holds none; it may request some resources,
use them, and then release all resources before requesting more.
Downside of method 1 - assume a process needs to copy a file from tape to disk and then print.
It will hold the printer the entire time even though it needs it only at the end.
Downside of method 2 - the process can request the tape and disk, release these, and then
request the printer - but only if the file remains on disk!! No guarantee.
Both give poor resource utilization and a chance of starvation.

3. No Preemption -
Method 1 - a process holds resources and needs another resource that is not available. The
process must release all the resources it holds, then wait on the released resources in
addition to the needed one.
Method 2 - check whether the requested resources are available; if so, allocate them. If not,
check whether they are allocated to another process that is itself waiting for resources. If
so, preempt the desired resources from the waiting process and allocate them to the requesting
process. If the requested resources are neither available nor held by a waiting process, the
requesting process waits; while waiting, its resources may in turn be preempted
by another process in the same way.

6
Carolyn
2
OPERATING SYSTEMS

4. Circular Wait -
Place an ordering on the resource types. Processes have to request resources in increasing
order of numbering.
Alternately, when a process requests a resource with a lower number, its resources with
higher numbers are deallocated.
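The resource-ordering rule can be sketched with ordinary locks: give every resource type a rank and always acquire in increasing rank, which makes a circular wait structurally impossible. The resource names, ranks, and helper functions below are illustrative.

```python
import threading

# Illustrative ranks: every lock must be taken in increasing rank order.
RANK = {"disk": 1, "tape": 2, "printer": 3}
locks = {name: threading.Lock() for name in RANK}

def acquire_in_order(*names):
    """Acquire the named resources in rank order; return the order used."""
    ordered = sorted(names, key=lambda n: RANK[n])
    for n in ordered:
        locks[n].acquire()
    return ordered

def release_all(names):
    # release in reverse acquisition order
    for n in reversed(names):
        locks[n].release()

# Even a task that asks for printer-then-disk actually locks disk first,
# so no task can ever hold 'printer' while waiting for 'disk'.
held = acquire_in_order("printer", "disk")
release_all(held)
```

Because every task obeys the same global order, a cycle P1 waits-for P2 waits-for ... waits-for P1 would require some task to wait for a lower-ranked resource while holding a higher-ranked one, which the rule forbids.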

Deadlock prevention can have the side effect of reduced system throughput and low device
utilization.

Deadlock Avoidance
Processes declare in advance what their maximum resource usage will be. The system uses this
a priori information about each process' resource needs to make sure it never enters a
deadlocked state (the deadlock-avoidance approach): it dynamically ensures that the
circular-wait condition cannot arise.

Def. A system is in a safe state if it can allocate resources to each process, in some order and
avoid deadlock.

A system is a safe system if there exists a safe sequence. A sequence of processes <P 1, P2, P3,
…, Pn > is a safe sequence for the current allocation state if for each P i, the resources that Pi can
still request can be satisfied by the currently available resources plus the resources held by all the
Pj, with j<i.

In this state if a process requests a resource and it isn't currently available, it can wait until the
other processes finish and then use the resources. The next process in the sequence can finish in
the same manner. If this doesn't happen the state is unsafe. Unsafe states may lead to deadlock.

        Maximum Needs   Current Needs
P0      10              5
P1      4               2
P2      9               2

If the system has 12 tape drives, we are in a safe state because <P1, P0, P2> satisfies the
safety condition.

If we are in the above state and grant P2 one more tape drive, the system is no longer safe.
Resource-Allocation Graph Algorithm
Used only when there is one instance of each resource.

In addition to request edges and assignment edges, there is a new edge called a claim edge.

A claim edge from a process to a resource indicates that the process may request that resource
sometime in the future. It is represented by a dashed arrow, with the same direction as a
request edge.

When process requests resource, claim edge becomes request edge. When process releases a
resource, assignment edge becomes claim edge.

If a process requests a resource, the request can be granted only if converting the request edge to
an assignment edge does not result in a cycle in the graph.

[Figure: resource-allocation graph with processes P1, P2 and resources R1, R2]

Safe, but if P2 were to request R2 then we would not be in a safe state.

Banker's Algorithm
Used when there are multiple instances of each resource.
Processes must declare the maximum number of instances of each resource that they will need;
this number cannot exceed the total number of resources in the system.
A process will be allocated resources only if doing so leaves the system in a safe state.
Otherwise, the process waits until some other process releases enough resources.

Given two vectors (arrays), X and Y of length n, X <= Y iff X[i] <= Y[i] for all i = 1,2,…,n.
i.e. if X = (1,7,3,2) and Y = (0,3,2,1) then Y <=X.
int available[m]     // each array position indicates the number of available
                     // instances of resource Rj
int max[n][m]        // rows represent processes, columns resources; value is the
                     // maximum number of instances of resource Rj needed by process Pi
int allocation[n][m] // number of resources of each type currently allocated to
                     // each process
int need[n][m]       // remaining resource needs of each process (need = max - allocation)
Safety Algorithm:
<step 1>
int work[m]
int finish[n]

work = available;

for (i = 0; i < n; i++) finish[i] = false;

<step 2>
Find an i such that both
    finish[i] == false
    needi <= work        // let needi be the need vector (row) for process Pi
If no such i exists, go to step 4.

<step 3>
work = work + allocationi
finish[i] = true
Go to step 2.

<step 4>
If finish[i] == true for all i, then the system is in a safe state.

The algorithm requires on the order of m x n^2 operations to decide whether a state is safe.


Resource Request Algorithm

<step 1>
If requesti <= needi, go to step 2; otherwise raise an error, since the request exceeds the
process' declared maximum.

<step 2>
If requesti <= available, go to step 3; otherwise the process waits.

<step 3>
The system pretends to allocate the resources:

available = available - requesti
allocationi = allocationi + requesti
needi = needi - requesti

If this results in a safe state, the resources are actually allocated; otherwise they are not,
and the process waits.

        Allocation    Max       Available
        A  B  C       A  B  C   A  B  C
P0      0  1  0       7  5  3   3  3  2
P1      2  0  0       3  2  2
P2      3  0  2       9  0  2
P3      2  1  1       2  2  2
P4      0  0  2       4  3  3

Need (Max - Allocation)

        A  B  C
P0      7  4  3
P1      1  2  2
P2      6  0  0
P3      0  1  1
P4      4  3  1

The system is safe: <P1, P3, P4, P2, P0> is a safe sequence.
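The safety algorithm can be checked against this worked example. Below is a compact Python version; `is_safe` is an illustrative helper that returns the safe sequence found by a first-fit scan, which may differ from (but is just as valid as) the sequence quoted above.

```python
def is_safe(available, maximum, allocation):
    """Safety algorithm: return a safe sequence of process indices, or None."""
    n, m = len(maximum), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish, seq = list(available), [False] * n, []
    while True:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend Pi runs to completion and returns its resources
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                seq.append(i)
                break
        else:                       # no process could proceed
            return seq if all(finish) else None

available  = [3, 3, 2]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
seq = is_safe(available, maximum, allocation)
# first-fit finds [1, 3, 0, 2, 4]; <P1, P3, P4, P2, P0> is another safe sequence
```

The same helper also confirms the tape-drive example from earlier in this section: with 12 drives, granting P2 one more drive (allocations 5, 2, 3 and one drive free) leaves no safe sequence.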


What happens if P1 requests one instance of A and two instances of C, i.e. (1, 0, 2)?

We now have the following state:


        Allocation    Max       Available
        A  B  C       A  B  C   A  B  C
P0      0  1  0       7  5  3   2  3  0
P1      3  0  2       3  2  2
P2      3  0  2       9  0  2
P3      2  1  1       2  2  2
P4      0  0  2       4  3  3

This is safe since <P1, P3, P4, P0, P2> is a safe sequence

What if P4 now requests (3, 3, 0)? The request cannot be granted, since the resources are not
available. What if P0 requests (0, 2, 0)? The resources are available, but granting the request
would leave the system in an unsafe state.

Deadlock Detection
In the absence of deadlock prevention and avoidance, we need deadlock detection.
This consists of two things: an algorithm to detect deadlock, and an algorithm to recover from
it. The detection-and-recovery approach includes the overhead of running the detection
algorithm plus any losses incurred in recovering from deadlock.
Wait-for graph - used when there is a single instance of each resource. It is obtained from the
resource-allocation graph by removing the resource nodes and collapsing the appropriate edges.

[Figure: a resource-allocation graph with processes P1-P5 and resources R1-R4, and the
corresponding wait-for graph obtained by removing the resource nodes]

int available[m]      // each array position indicates the number of available
                      // instances of resource Rj

int request[n][m]     // indicates the current request of each process

int allocation[n][m]  // number of resources of each type allocated to each process

<step 1>
int work[m]
int finish[n]

work = available;

for (i = 0; i < n; i++) if (allocationi != 0) finish[i] = false; else finish[i] = true;

<step 2>
Find an i such that both
    finish[i] == false
    requesti <= work        // let requesti be the request vector (row) for process Pi
If no such i exists, go to step 4.

<step 3>
work = work + allocationi
finish[i] = true
Go to step 2.

<step 4>
If finish[i] == false for some i, then the system is deadlocked, and each process Pi with
finish[i] == false is deadlocked.

        Allocation    Request   Available
        A  B  C       A  B  C   A  B  C
P0      0  1  0       0  0  0   0  0  0
P1      2  0  0       2  0  2
P2      3  0  3       0  0  0
P3      2  1  1       1  0  0
P4      0  0  2       0  0  2

Not deadlocked: p0, p2, p3, p1, p4 is an order in which every process can finish.
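The detection algorithm above can be written directly in Python and run on this worked example; `deadlocked` is an illustrative helper returning the indices of the deadlocked processes.

```python
def deadlocked(available, allocation, request):
    """Detection algorithm: return the indices of deadlocked processes."""
    n, m = len(allocation), len(available)
    work = list(available)
    # a process holding no resources cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]

allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
dead = deadlocked([0,0,0], allocation, request)    # [] - not deadlocked

request[2] = [0, 0, 1]        # P2 asks for one more instance of C
dead2 = deadlocked([0,0,0], allocation, request)   # [1, 2, 3, 4]
```

With P2's extra request for one instance of C (the second table below), only P0 can finish, and P1 through P4 are deadlocked.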


        Allocation    Request   Available
        A  B  C       A  B  C   A  B  C
P0      0  1  0       0  0  0   0  0  0
P1      2  0  0       2  0  2
P2      3  0  3       0  0  1
P3      2  1  1       1  0  0
P4      0  0  2       0  0  2

P2 makes an additional request for one more instance of C.

Now the system is deadlocked.

There is overhead in running detection algorithms. Should we run one each time a request is
made? In practice, run it as often as deadlock is likely to occur (e.g. once per hour, or when
CPU utilization drops below 40%).

Recovering from deadlock:

Tell the operator - he decides who shall live and who shall die!

Abort all deadlocked processes - the cost is high: all work done by the processes is lost.

Abort one at a time until the deadlock no longer exists - we need to run the deadlock detection
algorithm after each process is aborted, so this approach has high overhead too.

Resource Preemption -
- Choose a victim process
- Preempt it (releasing its resources)
- Run the detection algorithm
- Iterate until the state is no longer a deadlock state


Problem: Starvation - the same process may always be picked as the victim; include the number
of rollbacks in the cost factor.

CHAPTER 3
MEMORY MANAGEMENT
1. a) Memory management is the functionality of an operating system which handles or
manages primary memory. Memory management keeps track of each and every memory location,
whether it is allocated to some process or free. It checks how much memory is to be allocated to
processes, decides which process will get memory at what time, and tracks whenever some memory
gets freed or unallocated, updating the status correspondingly.
Memory management provides protection by using two registers, a base register and a limit
register. The base register holds the smallest legal physical memory address and the limit
register specifies the size of the range. For example, if the base register holds 300000 and the
limit register holds 120900, then the program can legally access all addresses from 300000
through 420899.

The binding of instructions and data to memory addresses can be done in the following ways:
Compile time -- When it is known at compile time where the process will reside, compile-time
binding is used to generate absolute code. This helps the system overall because –
 It is faster to load an existing executable-format process
Note – There is a chance the program itself will crash, as the defined (physical) address space
may already be occupied by another process. In this case the system, possibly with user
intervention, has to re-compile the whole program.
Load time -- When it is not known at compile time where the process will reside in memory,
the compiler generates relocatable code. The loader then –
 Translates the relocatable addresses to absolute addresses
 Adds the base address to all logical addresses, in turn generating absolute addresses
Note – Reloading of the whole process is necessary if the base address itself has changed in
the meantime.
Memory Management Page 74 of 157
OPERATING SYSTEMS
Execution time -- If the process can be moved during its execution from one memory
segment to another, then binding must be delayed until run time. At the time of execution a
process may need dynamic memory to save intermediate results or discard them, i.e.
allocation and de-allocation are unpredictable at execution time.
Addresses are combined as follows –
1. CPU-generated logical address
o 245
2. MMU relocation register, also called the base register
o 8000
3. So the final physical memory location will be
o 8000 + 245 = 8245

b) Roles of memory management

 Upgrading performance - Memory management also helps in upgrading the performance of


computer system. Due to memory management of the computer system, the computer system
remains stable and gives a good performance as a result.
 Execution of multiple processes - Memory management enables the execution of multiple
processes at the same time in the computer system.
 Utilization of memory space - Memory management shares the same memory space among
different processes. Hence many tasks can be performed in a particular memory space.
 Allocation of main memory space to processes.
 Provision to share information - An ideal memory management system must facilitate
sharing of data among multiple processes.
 Correct relocation of data - Data should be relocated to and from main memory in such a
manner that the currently running process is not affected. For example, if two processes are
sharing the same data, the memory management system relocates this data only after ensuring
that the two processes are no longer referencing it.
 Protection of data from illegal change - The memory management system of the operating
system should ensure that a process can access only the data for which it has the requisite
access rights, and is prohibited from accessing the data of other processes.

2. a) Loading

The operating system loads libraries of functions while processing different programs. Files
that are needed are brought into memory as the processing of the programs takes place.
Generally, loading is divided into two categories: static and dynamic. The process varies
depending on the loading method, where loading can take place all at once or at random
times. In static loading, the load process does not change over time; in dynamic loading, it
changes with time.

b) Linking
Linking is the process of collecting and combining various modules of code and data into an
executable file that can be loaded into memory and executed. The operating system can link
system-level libraries into a program. When it combines the libraries at load time, the linking
is called static linking, and when this linking is done at the time of execution, it is called
dynamic linking.
In static linking, libraries are linked at compile time, so the program code size becomes
bigger, whereas in dynamic linking libraries are linked at execution time, so the program code
size remains smaller.

c) Logical/virtual versus Physical Address Space


An address generated by the CPU is a logical address, whereas an address actually available
on the memory unit (hardware) is a physical address. A logical address is also known as a
virtual address.
The CPU assigns a logical address to any given physical address where the actual data is
stored. Since in a real scenario related data may be stored at different, non-contiguous
locations, the logical addresses generated by the CPU help in viewing the stored data as
contiguous.
The CPU also fetches memory/instructions based on the program counter. It may load or
store data in the words at those physical addresses.
There are two different types of addresses in the system –
 Logical or Virtual

 Physical
The physical address is where the actual data is stored, and the logical address generated by
CPU is how the system will see the addresses.
Physical memory (RAM) is divided into frames: contiguous, page-sized blocks of memory,
typically in the 4KB-16KB range. Physical addresses are provided by the hardware.
 One physical address space per machine
 Valid addresses are usually between 0 and some machine-specific maximum
 Not all addresses have to belong to the machine’s main memory.
 Other hardware devices can be mapped into the address space.
Virtual (or logical) addresses are provided by the OS kernel:
 One virtual address space per process
 Addresses may start at zero, but not necessarily
 Space may consist of several segments
Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred to
as a physical address space.
The run-time mapping from virtual to physical address is done by the memory management
unit (MMU) which is a hardware device also called address translator unit. MMU uses
following mechanism to convert virtual address to physical address.

The value in the base register is added to every address generated by a user process which is
treated as offset at the time it is sent to memory. For example, if the base register value is
10000, then an attempt by the user to use address location 100 will be dynamically
reallocated to location 10100.
Additionally, the MMU –
 Checks for protection violations
 Raises exceptions when necessary (e.g., a write operation on a read-only memory region).
The user program deals with virtual addresses; it never sees the real physical addresses.
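The relocation-register mechanism described above can be sketched as follows (the function name and values are made-up examples, not an actual MMU API):

```python
# Relocation-register (base + limit) translation with a protection check.

def mmu_translate(logical_addr, base, limit):
    """Add the base register to a logical address, after checking it
    against the limit register (the size of the legal address range)."""
    if not 0 <= logical_addr < limit:
        # The MMU would raise a protection-violation exception here.
        raise MemoryError(f"protection violation: {logical_addr} outside limit {limit}")
    return base + logical_addr

# The example above: base register 10000, logical address 100.
print(mmu_translate(100, base=10000, limit=4096))  # → 10100
```

The user process only ever works with the logical address (100); the physical location (10100) is produced transparently at access time.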


3. Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken
into little pieces. After some time, processes cannot be allocated to memory blocks because
the blocks are too small, and the blocks remain unused. This problem is known as
fragmentation.
Fragmentation is of two types:
a) External fragmentation - Total memory space is enough to satisfy a request or to hold a
process, but it is not contiguous, so it cannot be used.
b) Internal fragmentation - The memory block assigned to a process is bigger than requested.
Some portion of the block is left unused, and it cannot be used by another process.
External fragmentation can be reduced by compaction: shuffling memory contents to place
all free memory together in one large block. To make compaction feasible, relocation should
be dynamic.
Internal fragmentation can be reduced by assigning the smallest partition that is still large
enough for the process.

4. Memory Allocation
Main memory usually has two partitions:
Low Memory -- The operating system resides in this memory.
High Memory -- User processes are held in high memory.

a) Operating system uses the following memory allocation mechanism:
i) Single-partition allocation - In this type of allocation, relocation-register scheme is used to
protect user processes from each other, and from changing operating-system code and data.
Relocation register contains value of smallest physical address whereas limit register
contains range of logical addresses. Each logical address must be less than the limit register.
ii) Multiple partition allocation - In this type of allocation, main memory is divided into a
number of fixed-sized partitions where each partition should contain only one process. When
a partition is free, a process is selected from the input queue and is loaded into the free
partition. When the process terminates, the partition becomes available for another process.

b) Memory Management strategies


There are two Memory Management Techniques: Contiguous, and Non-Contiguous. In
Contiguous Technique, executing process must be loaded entirely in main-memory. Can be
divided into:

i) Dynamic (or Variable) Size allocation - Initially RAM is empty and partitions are made
during run-time according to each process’s need, instead of being partitioned at system
configuration time. The size of a partition is equal to the size of the incoming process. The
partition size varies according to the need of the process, so internal fragmentation can be
avoided and RAM is used efficiently. The number of partitions in RAM is not fixed and
depends on the number of incoming processes and the main memory’s size.

Advantages of Variable Partitioning –


1. No Internal Fragmentation: In variable Partitioning, space in main memory is allocated
strictly according to the need of process, hence there is no case of internal fragmentation. There will
be no unused space left in the partition.
2. No restriction on Degree of Multiprogramming: More number of processes can be
accommodated due to absence of internal fragmentation. A process can be loaded until the memory
is empty.
3. No Limitation on the size of the process: In variable partitioning, the process size can’t be
restricted since the partition size is decided according to the process size.

Disadvantages of Variable Partitioning –

1. Difficult Implementation: Implementing variable partitioning is difficult compared to
fixed partitioning, as it involves allocating memory at run-time rather than at system
configuration time.
2. External Fragmentation: There will be external fragmentation in spite of absence of
internal fragmentation. The empty space in memory cannot be allocated as no spanning is allowed in
contiguous allocation. The rule says that process must be contiguously present in main memory to
get executed. Hence it results in External Fragmentation.

ii) Fixed (or static) size allocation - This is the oldest and simplest technique used to put
more than one processes in the main memory. In this partitioning, number of partitions (non-
overlapping) in RAM are fixed but size of each partition may or may not be same. As it is
contiguous allocation, no spanning is allowed. Here partitions are made before execution, at
system configuration time.
Advantages of Fixed Partitioning –
1. Easy to implement: Algorithms needed to implement Fixed Partitioning are easy to
implement. It simply requires putting a process into certain partition without focusing on the
emergence of Internal and External Fragmentation.
2. Little OS overhead: Processing of Fixed Partitioning require lesser excess and indirect
computational power.
Disadvantages of Fixed Partitioning –
1. Internal Fragmentation: Main memory use is inefficient. Any program, no matter how
small, occupies an entire partition. This can cause internal fragmentation.
2. External Fragmentation: The total unused space (as stated above) of various partitions
cannot be used to load the processes even though there is space available but not in the contiguous
form (as spanning is not allowed).
3. Limit process size: Process of size greater than size of partition in Main Memory cannot be
accommodated. Partition size cannot be varied according to the size of incoming process’s size.

c) Placement policies
While various different strategies are used to allocate space to processes competing for
memory, three of the most popular are Best fit, Worst fit, and First fit. Each of these
strategies are described below [Nutt 1997]:
Best fit: The allocator places a process in the smallest block of unallocated memory in which
it will fit. For example, suppose a process requests 12KB of memory and the memory
manager currently has a list of unallocated blocks of 6KB, 14KB, 19KB, 11KB, and 13KB
blocks. The best-fit strategy will allocate 12KB of the 13KB block to the process.

Worst fit: The memory manager places a process in the largest block of unallocated memory
available. The idea is that this placement will create the largest hole after the allocation, thus
increasing the possibility that, compared to best fit, another process can use the remaining
space. Using the same example as above, worst fit will allocate 12KB of the 19KB block to
the process, leaving a 7KB block for future use.

First fit: There may be many holes in the memory, so the operating system, to reduce the
amount of time it spends analyzing the available spaces, begins at the start of primary
memory and allocates memory from the first hole it encounters large enough to satisfy the
request. Using the same example as above, first fit will allocate 12KB of the 14KB block to
the process.

[Figure: main memory after allocating the 12KB request under Best Fit, Worst Fit, and First Fit]

Notice in the diagram above that the Best fit and First fit strategies both leave a tiny segment
of memory unallocated just beyond the new process. Since the amount of memory is small, it
is not likely that any new processes can be loaded here. This condition of splitting primary
memory into segments as the memory is allocated and deallocated is known as
fragmentation. The Worst fit strategy attempts to reduce the problem of fragmentation by
allocating the largest fragments to new processes. Thus, a larger amount of space will be left
as seen in the diagram above.
In summary:
i) Best fit - Search the whole list on each allocation. Choose the smallest block that can
satisfy request. Tends to leave very large holes and very small holes.
Advantages: Memory Efficient. The operating system allocates the job minimum possible
space in the memory, making memory management very efficient.
Disadvantage: It is a Slow Process. Checking the whole memory for each job makes the
working of the operating system very slow. It takes a lot of time to complete the work.
ii) First fit - Choose first block that can satisfy request. Tends to leave “average” size holes
Advantage: It is fast in processing. As the processor allocates the nearest available memory
partition to the job, it is very fast in execution.
Disadvantages: It wastes a lot of memory. The processor ignores if the size of partition
allocated to the job is very large as compared to the size of job or not.

iii) Worst fit - Choose largest block. Simulation shows that worst fit is worst in terms of
storage utilization
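The three placement policies above can be sketched over a list of free blocks (hypothetical helpers; the hole sizes match the 12KB example above):

```python
# Placement policies over a free-block list (sizes in KB).
# Each function returns the index of the chosen hole, or None.

def first_fit(holes, size):
    # Take the first hole large enough to satisfy the request.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Scan the whole list; take the smallest hole that still fits.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    # Scan the whole list; take the largest hole.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [6, 14, 19, 11, 13]   # the unallocated blocks from the example
print(first_fit(holes, 12))   # → 1  (the 14KB block)
print(best_fit(holes, 12))    # → 4  (the 13KB block)
print(worst_fit(holes, 12))   # → 2  (the 19KB block)
```

Note that best fit and worst fit must examine every hole, while first fit can stop at the first match, which is why first fit is the fastest of the three.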

5. Memory Allocation Techniques


a) Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to a backing store, and then brought back into memory for continued execution.
The backing store is usually a hard disk drive or other secondary storage that is fast in
access and large enough to accommodate copies of all memory images for all users. It must
be capable of providing direct access to these memory images. Swap out is the method of
removing a process from RAM and adding it to the hard disk. Conversely, swap in means
removing the program from the hard disk and placing it back into main memory (RAM).
The major time-consuming part of swapping is transfer time. Total transfer time is directly
proportional to the amount of memory swapped. Assume the user process is 100KB in size
and the backing store is a standard hard disk with a transfer rate of 1MB (1000KB) per
second. The actual transfer of the 100KB process to or from memory will take
100KB / 1000KB per second
= 1/10 second
= 100 milliseconds

Advantages of Swapping
1. The process helps the CPU to manage multiple processes within the same main memory.
2. The method helps to create and use Virtual Memory.
3. The method is economical.
4. Swapping makes a CPU perform several tasks simultaneously. Hence, processes do not have
to wait for too long before they are executed.
Disadvantages of Swapping
1. The overall method depends heavily on virtual memory. Such dependence can result in a
significant performance drop.
2. In the case of heavy swapping activity, if the computer system loses power, the user might
lose all the information related to the program.
3. If the swapping algorithm is not good, the overall method can increase the number of page
faults and degrade overall processing performance.
4. Inefficiency may arise when a resource or variable is commonly used by the processes
participating in swapping.

b) Paging
Paging is a memory-management scheme that permits the physical address space of a process
to be noncontiguous or in other words eliminates the need for contiguous allocation of
physical memory. External fragmentation is avoided by using the paging technique. Paging is
a technique in which physical memory is broken into fixed-size blocks called frames and
logical memory into blocks of the same size called pages (the size is a power of 2, between
512 bytes and 8192 bytes). When a process is to be executed, its pages are loaded into any
available memory frames.
Logical address space of a process can be non-contiguous and a process is allocated physical
memory whenever the free memory frame is available. Operating system keeps track of all
free frames. Operating system needs n free frames to run a program of size n pages.
NOTE:
Frames are basically the sliced-up physical memory blocks of equal size. Example: 512KB of
memory can be divided into four frames of 128KB each.
Pages are the sliced-up logical memory blocks of equal size. When solving any problem, the
page size should always be equal to the frame size.

Address generated by CPU is divided into:


Page number (p) -- page number is used as an index into a page table which contains base
address of each page in physical memory.
Page offset (d) -- page offset is combined with base address to define the physical memory
address.
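The page number/offset split above can be sketched as follows (page size and page-table contents are made-up toy values):

```python
# Paging address translation: split a CPU-generated address into a
# page number and an offset, then look the page up in the page table.

PAGE_SIZE = 1024  # bytes; must be a power of two

def paged_translate(logical_addr, page_table):
    p = logical_addr // PAGE_SIZE   # page number: index into the page table
    d = logical_addr % PAGE_SIZE    # page offset: kept unchanged
    frame = page_table[p]           # base frame holding that page
    return frame * PAGE_SIZE + d    # physical address = frame base + offset

page_table = {0: 5, 1: 2, 2: 7}     # page -> frame (toy mapping)
# Logical address 1100 = page 1, offset 76, so physical = 2*1024 + 76:
print(paged_translate(1100, page_table))  # → 2124
```

In hardware the split is a bit-shift and mask rather than a division, since the page size is a power of two.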


The following figure shows the page-table architecture:

Advantages of Paging Scheme –


1. No external fragmentation.
2. User’s views of memory and actual physical memory are separated. The user view memory
as a single contiguous space that contains only one process.
3. Efficient use of main memory.
4. Paging is simple to implement.
5. Due to equal size of the pages and frames, swapping becomes very easy
Disadvantages of Paging Scheme
1. Suffer from internal fragmentation.
2. Page table requires extra memory space, so may not be good for a system having
small RAM.

c) Segmentation
Segmentation is a technique to break memory into logical pieces where each piece represents
a group of related information: for example, a data segment or code segment for each
process, a data segment for the operating system, and so on. Segmentation can be
implemented with or without paging.
Unlike pages, segments have varying sizes, which eliminates internal fragmentation. External
fragmentation still exists, but to a lesser extent.


Address generated by CPU is divided into:


Segment number (s) -- segment number is used as an index into a segment table which
contains base address of each segment in physical memory and a limit of segment.
Segment offset (o) -- segment offset is first checked against limit and then is combined with
base address to define the physical memory address.
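That lookup can be sketched as follows (the segment-table base/limit values are made-up examples):

```python
# Segment-table translation: the offset is checked against the segment's
# limit first, then added to the segment's base address.

def seg_translate(s, offset, segment_table):
    base, limit = segment_table[s]
    if not 0 <= offset < limit:
        # Hardware would raise an addressing (segmentation) fault here.
        raise MemoryError(f"offset {offset} beyond limit {limit} of segment {s}")
    return base + offset

# Toy table: segment -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400)}
print(seg_translate(1, 53, segment_table))  # → 6353
```

Unlike paging, the limit must be stored and checked per segment because segments have varying sizes.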

There are the following types of segmentation –
 Virtual Memory Segmentation – Each process is segmented into n divisions; however, they
are not all segmented at once.
 Simple Segmentation – Each process is segmented into n divisions, which are all segmented
together at once at run time, though they may be scattered in memory, i.e. can be non-
contiguous.

Advantages

1. The segment table is used to keep the record of segments and segment table occupies less
space as compared to the paging table.
2. No internal fragmentation
3. Segmentation provides a powerful memory management mechanism.
4. It allows programmers to partition their programs into modules that operate independently of
one another.
5. Segments allow two processes to easily share data.
6. It extends the addressability of a processor, i.e. segmentation allows the use of 16-bit
registers to give an addressing capability of 1 MB.
7. Segmentation makes it possible to separate the memory areas for stack, code and data.
8. It is possible to increase the memory size of code data or stack segments beyond 64 KB by
allotting more than one segment for each area.
Disadvantages
1. Due to segments external fragmentation occurs and external fragmentation results in a lot of
memory waste.
2. Costly memory management algorithms.
3. Segments of unequal size not suited for swapping.

 Difference between paging and segmentation:
- In paging, memory is divided into fixed-size pages; in segmentation, it is divided into variable-size segments.
- Page size is determined by the hardware; segment size is determined by the programmer/compiler.
- Paging suffers from internal fragmentation; segmentation suffers from external fragmentation.
- Paging is invisible to the programmer; segmentation is visible to the programmer.
- A page table maps page numbers to frame numbers; a segment table stores a base and a limit for each segment.


6. a) Virtual Memory.
 Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference memory are
distinguished from the addresses the memory system uses to identify physical storage sites, and
program generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and the
amount of secondary memory available, not by the actual number of main storage locations.
 A computer can address more memory than the amount physically installed on the system.
This extra memory is actually called virtual memory and it is a section of a hard disk that's set up to
emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory protection, because
each virtual address is translated to a physical address.
Following are the situations, when entire program is not required to be loaded fully in main
memory hence use of virtual memory.
 User written error handling routines are used only when an error occurred in the data or
computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small amount
of the table is actually used.
 The ability to execute a program that is only partially in memory would confer many
benefits.
 Less number of I/O would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is
available.
 Each user program could take less physical memory, more programs could be run the same
time, with a corresponding increase in CPU utilization and throughput.

Benefits of having Virtual Memory:

1. Large programs can be written, as virtual space available is huge compared to physical
memory.
2. Less I/O required, leads to faster and easy swapping of processes.
3. More physical memory available, as programs are stored on virtual memory, so they occupy
very less space on actual physical memory.

b) Overlays

Overlay is a technique to run a program that is bigger than the physical memory by keeping
in memory only those instructions and data that are needed at any given time. The program is
divided into modules in such a way that not all modules need to be in memory at the same
time.
The overlay concept says: whatever part you require, you load it, and once that part is done,
you unload it - pull it back out, bring in the new part you require, and run it.
Formally, “the process of transferring a block of program code or other data into internal
memory, replacing what is already stored”.
Sometimes the size of the program is even larger than the largest partition; in that case, you
should use overlays.
Advantage –
1. Reduce memory requirement
2. Reduce time requirement
Disadvantage –
1. The overlay map must be specified by the programmer
2. The programmer must know the memory requirements
3. Overlapped modules must be completely disjoint
4. Programming design of overlays structure is complex and not possible in all cases

Example –
The best example of overlays is an assembler. Consider an assembler with 2 passes; 2-pass
means at any time it will be doing only one thing, either the 1st pass or the 2nd pass, i.e. it
will finish the 1st pass first and then the 2nd pass. Let us assume that the available main
memory size is 150KB and the total code size is 200KB.
Pass 1.......................70KB
Pass 2.......................80KB
Symbol table.................30KB
Common routine...............20KB

As the total code size is 200KB and the main memory size is 150KB, it is not possible to keep
both passes in memory together. So, in this case, we should use the overlay technique.
According to the overlay concept, at any time only one pass will be in memory, and both
passes always need the symbol table and the common routine. Now the question is: if the
overlay driver* is 10KB, what is the minimum partition size required? For pass 1 the total
memory needed is (70KB + 30KB + 20KB + 10KB) = 130KB, and for pass 2 it is (80KB +
30KB + 20KB + 10KB) = 140KB. So with a partition of at least 140KB we can run this code
easily.
Overlay driver: It is the user’s responsibility to take care of overlaying; the operating
system will not provide anything. This means the user must specify which parts are required
in the 1st pass and, once the 1st pass is over, write the code to pull out pass 1 and load pass 2.
That is the responsibility of the user, known as the overlay driver. The overlay driver just
helps us move the various parts of the code in and out.

c) Associative memory
A type of computer memory from which items may be retrieved by matching some part of
their content, rather than by specifying their address (hence also called associative
storage or Content-addressable memory (CAM).
Associative memory is used in specific high-speed searching applications. CAM works by
the user providing a data word; the memory then searches its entire contents to see if that
word is stored anywhere. If the word is found, the CAM returns a list of all of the storage
addresses where the word occurs.
NOTE:
Regular memory is a set of storage locations that are accessed through an address. Think
of it like the houses on your street. If you wanted to send a package or letter to your
neighbor, you would send it to their address, and it would get stored at their house. Simple,
right?
Associative memory is also a set of storage locations, but they work a little differently.
Instead of looking up a storage location by its address, it looks up a storage location by its
contents. So if you wanted to send that same package or letter to your neighbor, you would
send it to the house where your neighbor is actually located, and it would get stored there.
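The contrast above can be sketched in code (the data and addresses are made up; a real CAM searches all locations in parallel in hardware, while this is only a sequential functional sketch):

```python
# Address-based lookup vs. content-based (associative) lookup.

memory = {0x00: "cat", 0x04: "dog", 0x08: "cat", 0x0C: "fox"}

def cam_search(word):
    """Return every address whose contents match `word` (CAM behavior)."""
    return [addr for addr, data in memory.items() if data == word]

print(memory[0x04])        # address-based: give an address, get its data
print(cam_search("cat"))   # content-based: give data, get all matching addresses
```

The first lookup answers "what is stored at address 4?"; the second answers "at which addresses is 'cat' stored?", which is exactly the inversion associative memory provides.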

7. Demand Paging
a) A demand paging system is one where processes reside in secondary memory and pages
are loaded only on demand, not in advance. When a context switch occurs, the operating
system does not copy any of the old program’s pages out to disk or any of the new program’s
pages into main memory. Instead, it just begins executing the new program after loading the
first page, and fetches that program’s pages as they are referenced.

While executing a program, if the program references a page which is not available in main
memory (because it was swapped out a while ago), the processor treats this invalid memory
reference as a page fault and transfers control from the program to the operating system,
which demands the page back into memory.
Advantage:
1. Only loads pages that are demanded by the executing process.
2. As there is more space in main memory, more processes can be loaded reducing context
switching time which utilizes large amounts of resources.
3. Less loading latency occurs at program startup, as less information is accessed from
secondary storage and less information is brought into main memory.
4. Does not need extra hardware support than what paging needs, since protection fault can be
used to get page fault.
5. Large virtual memory.
6. More efficient use of memory.
7. There is no limit on degree of multiprogramming.

Disadvantage:
1. Individual programs face extra latency when they access a page for the first time. So demand
paging may have lower performance.
2. Programs running on low-cost, low-power embedded systems may not have a memory
management unit that supports page replacement.
3. Memory management with page replacement algorithms becomes slightly more complex.
4. Possible security risks, including vulnerability to timing attacks

b) Recovery from a page fault


When a page fault occurs these steps are followed by the operating system and the required
page is brought into memory:


1. The memory address requested by the process is first checked, to verify that the request made
by the process is valid. (If the CPU tries to reference a page that is not currently in main
memory, it generates an interrupt indicating a memory-access fault.)
2. If the reference is found to be invalid, the process is terminated. (Otherwise, the OS puts the
interrupted process into a blocked state.)
3. If the request is valid, a free frame is located, possibly from a free-frame list, into which the
required page will be moved.
4. A disk operation is scheduled to move the required page from secondary storage into the chosen
frame. (This usually blocks the process on an I/O wait, allowing some other process to use the
CPU in the meantime. When no frame is free, a page replacement algorithm decides which
resident page to evict.)
5. When the I/O operation is complete, the process's page table is updated with the new frame
number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault is restarted from the beginning, and the process is
placed back into the ready state.
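The recovery steps above can be sketched as a small simulation. This is an illustrative model only (the dictionary-based page table, free-frame list, and backing store are simplifications invented for the example), not real kernel code, and it omits page replacement:

```python
# Illustrative sketch of demand-paging fault recovery (not real kernel code).
# Assumptions: page_table maps page -> frame (or is missing the page when it
# is not resident), free_frames supplies frames, backing_store is the disk.

def access_page(page, page_table, free_frames, backing_store, memory):
    if page not in backing_store:           # steps 1-2: invalid reference
        raise MemoryError(f"invalid reference: page {page!r} not in address space")
    frame = page_table.get(page)
    if frame is not None:                   # page already resident: no fault
        return memory[frame]
    # --- page fault: steps 3-5 ---
    frame = free_frames.pop()               # step 3: locate a free frame
    memory[frame] = backing_store[page]     # step 4: disk -> memory (I/O wait)
    page_table[page] = frame                # step 5: update page table, mark valid
    return memory[frame]                    # step 6: restart the access

page_table = {}                             # nothing resident yet
free_frames = [0, 1, 2]
backing_store = {"A": "data-A", "B": "data-B"}
memory = {}

print(access_page("A", page_table, free_frames, backing_store, memory))  # faults, loads "data-A"
print(access_page("A", page_table, free_frames, backing_store, memory))  # hit, no fault
```

A second access to the same page finds it in the page table and skips the fault path entirely, which is the whole point of keeping the table up to date in step 5.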
NOTE:
 Thrashing - A state in which the system spends most of its time swapping process pieces
rather than executing instructions.
To avoid this, the operating system tries to guess, based on recent history, which pieces are
least likely to be used in the near future.
Recovery from Thrashing:
 Do not allow the system to go into thrashing by instructing the long-term scheduler not to
bring processes into memory beyond the threshold.
 If the system is already thrashing, instruct the medium-term scheduler to suspend some of
the processes so that the system can recover from thrashing.

 Dynamic Binding –
The association of a ‘function definition’ with a ‘function call’, or of a ‘value’ with a
‘variable’, is called binding. During compilation, every function definition is given a
memory address; when the function is called, control of program execution moves to that
memory address and the function code stored at that location is executed. This is binding
of a ‘function call’ to a ‘function definition’. Binding can be classified as ‘static
binding’ and ‘dynamic binding’: if it is known before runtime which function will be
invoked or what value is allotted to a variable, it is static binding; if it only becomes
known at runtime, it is dynamic binding.
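A short example may make the distinction concrete. Python resolves method calls at runtime, so the call below is dynamically bound to whichever `speak` definition belongs to the object's actual class; the class names are invented for illustration:

```python
# Dynamic binding illustrated with Python method dispatch: which speak()
# runs is decided at call time from the object's actual class, not from
# anything known at compile time.

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):          # overrides Animal.speak
        return "woof"

def make_sound(animal):
    return animal.speak()     # bound to Dog.speak or Animal.speak at runtime

print(make_sound(Animal()))   # ...
print(make_sound(Dog()))      # woof
```

In a language with static binding, the call site would be fixed to one function definition at compile time instead.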

 Memory hierarchy

Registers - A register is usually static RAM (SRAM) inside the processor, used for holding
a data word, typically 64 or 128 bits. The program counter is the most important register
and is found in all processors. Most processors also use a status word register and an
accumulator; the status word register is used for decision making, and the accumulator is
used to store data such as the results of mathematical operations.
Cache Memory - Cache memory is also found in the processor, though occasionally it may be
a separate IC (integrated circuit); it is organized into levels. The cache holds the chunks
of data that are most frequently used from main memory.
Main Memory - Main memory is the storage unit that communicates directly with the CPU. It
is the main storage unit of the computer: fast and relatively large memory used for storing
data throughout the operations of the computer. It is made up of RAM as well as ROM.
Magnetic Disks - Magnetic disks are circular platters made of plastic or metal and coated
with magnetizable material. Frequently both faces of a platter are used, and several
platters may be stacked on one spindle, with read/write heads available for every surface.
All the platters rotate together at high speed. Bits are stored on the magnetized surface
in spots along concentric circles called tracks, which are usually divided into sections
named sectors.
Magnetic Tape - Magnetic tape is a magnetic recording medium consisting of a thin
magnetizable coating on a long, narrow strip of plastic film. It is mainly used to back up
huge amounts of data. Whenever the computer needs to access a tape, it first mounts it;
once access to the data is finished, the tape is unmounted. Access time on magnetic tape is
slow, and locating data on a tape can take minutes.
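The payoff of the hierarchy can be illustrated with the usual effective-access-time calculation; the timings and hit ratio below are illustrative assumptions, not figures for any real hardware:

```python
# Effective access time across two adjacent levels of the hierarchy:
# EAT = hit_ratio * fast_time + (1 - hit_ratio) * slow_time.
# The numbers used below are illustrative assumptions only.

def effective_access_time(hit_ratio, fast_ns, slow_ns):
    return hit_ratio * fast_ns + (1 - hit_ratio) * slow_ns

# e.g. cache at 2 ns, main memory at 100 ns, 95% cache hit ratio
print(effective_access_time(0.95, 2, 100))  # approximately 6.9 ns on average
```

Even a modest hit ratio at the fast level pulls the average access time far below the slow level's latency, which is why the hierarchy is economical.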

Advantages of Memory Hierarchy


 Memory distribution is simple and economical
 Reduces external fragmentation
 Data can be spread across the levels
 Permits demand paging and pre-paging
 Swapping is more efficient



CHAPTER FOUR

DEVICE AND I/O MANAGEMENT

 Management of I/O devices is a very important part of the operating system - so important
and so varied that entire I/O subsystems are devoted to its operation. Consider the range of
devices on a modern computer: mice, keyboards, disk drives, display adapters, USB
devices, network connections, audio I/O, printers, special devices for the handicapped, and
many special-purpose peripherals.
 I/O Subsystems must contend with two (sometimes conflicting) trends:

(i) The gravitation towards standard interfaces for a wide range of devices.
(ii) The development of entirely new types of devices, for which the existing
standard interfaces are not always easy to apply.

 The three major jobs of a computer are Input, Output, and Processing. The primary role of
the operating system in computer Input / Output is to manage and organize I/O operations
and all I/O devices.
Objectives of Device and I/O management
1. Generality and Device Independence:

I/O devices are typically quite complex mechanically and electronically. Much of this
complexity is related to the electronic engineering and is of no interest to the user or
the programmer. The average user is not aware of the complexities of positioning the
heads on a disk drive, reading the signal from the disk surface, waiting for the
required sector to rotate into position etc.

2. Efficiency:
Perhaps the most significant characteristic of the I/O system is the speed disparity
between it and the processor. I/O devices involve mechanical operations. They cannot
compete with the microsecond or nanosecond speed of the processor and memory.
The I/O management module must try to minimize the disparity by the use of
techniques like buffering and spooling.
3. Character code Independence

4. Uniform treatment of devices


In summary, the objectives of I/O device management include:
1. To provide a uniform and simple view of I/O (hide complexity of device handling).

2. To provide device independence from:
(i) Device type - terminal, disk, or tape
(ii) Device instance - which terminal, which tape, etc.
3. Fair access to shared devices
4. Smooth allocation of dedicated devices
5. Ability to exploit parallelism of I/O devices for multiprogramming
Devices can however vary considerably in:
- Differences in speed-synchronous or asynchronous
- Unit of data transfer – character or block
- Character codes – are they all the same? Does everyone use the same representation?
- Operations supported – read, write, seek, print etc
- Error conditions
- Sharable, or dedicated to a single user, e.g. printer, tape
Principles of Device(I/O) hardware
I/O Devices
A device, also called a peripheral or peripheral device, is a component used to put
information into and get information out of the computer.

Several categories of peripheral devices may be identified, based on their relationship with the
computer:

An input device sends data or instructions to the computer, such as a

 Mouse,
 Keyboard,
 Graphics Tablet,
 Image Scanner,
 Barcode Reader,
 Game Controller,
 Light Pen,
 Light Gun,
 Microphone,
 Digital Camera,
 Webcam,
 Dance Pad,
 Read-Only Memory

An output device provides output from the computer, such as a

 Computer Monitor,
 Projector,

 Printer,
 Headphones
 Computer Speaker

An input/output device performs both input and output functions, such as a computer data
storage device (including:

 Disk Drive
 USB Flash Drive
 Memory Card
 Tape Drive.

Many modern electronic devices, such as Internet-enabled digital watches, keyboards, and tablet
computers, have interfaces for use as computer peripheral devices.

All devices in a computer are connected through a bus system.

See diagram below.

Device Controllers
The device controller works as an interface between a device and a device driver. I/O units
typically consist of a mechanical component and an electronic component, where the
electronic component is called the device controller.

A device controller is a system that handles the incoming and outgoing signals of the CPU. A
device is connected to the computer via a plug and socket, and the socket is connected to a
device controller. Device controllers use binary and digital codes. An IO device contains
mechanical and electrical parts. A device controller is the electrical part of the IO device.

There is always a device controller and a device driver for each device to communicate with the
Operating Systems.

A device controller may be able to handle multiple devices. As an interface, its main task is to
convert a serial bit stream to a block of bytes and to perform error correction as necessary.


The following is a model for connecting the CPU, memory, controllers, and I/O devices where
CPU and device controllers all use a common bus for communication.

Communication with I/O Devices (I/O Operation Techniques)


I/O operations deal with the exchange of data between memory and external devices, either
toward memory (READ) or from memory (WRITE).

There are three techniques available for communication between the CPU and a device:

 Programmed I/O
 Interrupt driven I/O
 Direct memory access (DMA)

1. Programmed I/O

Data are exchanged between the processor and the I/O module. The processor executes a
program that gives it direct control of the I/O operation, including sensing device status, sending
a read or write command, and transferring the data. When the processor issues a command to the
I/O module, it must wait until the I/O operation is complete. If the processor is faster than the I/O
module, this is wasteful of processor time. The overall operation of programmed I/O can be
summarized as follows:

1. The processor is executing a program and encounters an instruction relating to an I/O
operation.
2. The processor then executes that instruction by issuing a command to the appropriate I/O
module.
3. The I/O module will perform the requested action based on the I/O command issued by
the processor (READ/WRITE) and set the appropriate bits in the I/O status register.
4. The processor will periodically check the status of the I/O module until it finds that the
operation is complete.

Programmed I/O Mode Input Data Transfer

1. Each input byte is read after first testing whether the device is ready with the input (a
state reflected by a bit in a status register).
2. The program waits for the ready status by repeatedly testing the status bit, until all
targeted bytes are read from the input device.
3. The program is busy (non-waiting) only once the device is ready; otherwise it is in a
wait state.

Programmed I/O Mode Output Data Transfer

1. Each output byte is written after first testing whether the device is ready to accept it,
i.e. its output register or output buffer is empty.
2. The program waits for the ready status by repeatedly testing the status bit(s), until all
the targeted bytes are written to the device.
3. The program is busy (non-waiting) only once the device is ready; otherwise it is in a
wait state.
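The busy-wait loops described above can be sketched as follows; `FakeDevice` is a stand-in invented for the example and simply pretends to become ready every third poll of its status bit:

```python
# Programmed I/O (busy-wait) sketch: the CPU repeatedly tests a status
# bit until the device is ready, then transfers one byte at a time.
# FakeDevice is an invented stand-in for a real controller.

class FakeDevice:
    def __init__(self, data):
        self._data = list(data)
        self._polls = 0

    @property
    def ready(self):                  # the status-register bit
        self._polls += 1
        return self._polls % 3 == 0   # pretend it is ready every 3rd poll

    def read_byte(self):
        return self._data.pop(0)

    def has_data(self):
        return bool(self._data)

def programmed_io_read(dev):
    out = []
    while dev.has_data():
        while not dev.ready:          # busy waiting: no useful CPU work here
            pass
        out.append(dev.read_byte())
    return bytes(out)

print(programmed_io_read(FakeDevice(b"hi")))  # b'hi'
```

The inner `while not dev.ready` loop is exactly the wasted processor time the text describes: the CPU spins on the status bit instead of doing other work.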

I/O Commands
To execute an I/O-related instruction, the processor issues an address, specifying the particular
I/O module and external device, and an I/O command. There are four types of I/O commands
that an I/O module may receive when it is addressed by a processor:

 Control: Used to activate a peripheral and tell it what to do. For example, a magnetic-tape
unit may be instructed to rewind or to move forward one record. These commands are
tailored to the particular type of peripheral device.
 Test: Used to test various status conditions associated with an I/O module and its peripherals.
The processor will want to know that the peripheral of interest is powered on and available
for use. It will also want to know if the most recent I/O operation is completed and if any
errors occurred.
 Read: Causes the I/O module to obtain an item of data from the peripheral and place it in an
internal buffer. The processor can then obtain the data item by requesting that the I/O module
place it on the data bus.
 Write: Causes the I/O module to take an item of data (byte or word) from the data bus and
subsequently transmit that data item to the peripheral.

I/O Instruction
 Each device is given a unique identifier or address.

 When the processor issues an I/O command, the command contains the address of the
desired device.
 Thus, the I/O module must interpret the address lines to determine whether the command is
for itself, and which external device the address refers to.
When the processor, main memory, and I/O share a common bus, two modes of addressing are
possible:

(a) Memory mapped I/O


(b) Isolated I/O

(a) Memory mapped I/O


There is a single address space for memory locations and I/O devices and the processor
treats the status and data registers of I/O modules as memory locations and uses the
same machine instructions to access both memory and I/O devices. So, for example,
with 10 address lines, a combined total of 2^10 = 1024 memory locations and I/O addresses
can be supported, in any combination. With memory-mapped I/O, a single read line and
a single write line are needed on the bus.
(b) Isolated I/O
The bus may be equipped with memory read and write plus input and output command
lines. Now, the command line specifies whether the address refers to a memory location
or an I/O device. The full range of addresses may be available for both. Again, with 10
address lines, the system may now support both 1024 memory locations and 1024 I/O
addresses.
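The arithmetic behind these two modes is simply powers of two; the helper below is only a worked illustration of the 10-address-line example:

```python
# With n address lines there are 2**n distinct addresses. Memory-mapped
# I/O splits one such space between memory and devices; isolated I/O
# provides a full space for each, selected by a separate command line.

def address_space(n_lines):
    return 2 ** n_lines

combined = address_space(10)        # memory-mapped: one shared space
print(combined)                      # 1024 addresses total, in any mix

isolated_mem = address_space(10)     # isolated I/O: full space for memory...
isolated_io = address_space(10)      # ...and a second full space for devices
print(isolated_mem, isolated_io)     # 1024 and 1024
```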

Advantages and Disadvantages of Programmed I/O

Advantages:
- simple to implement
- requires very little hardware support

Disadvantages:
- busy waiting
- ties up the CPU for long periods with no useful work

2. Interrupt initiated/Driven I/O

Interrupt I/O is a way of controlling input/output activity whereby a peripheral or terminal
that needs to make or receive a data transfer sends a signal. This causes a program interrupt
to be set at a time appropriate to the priority level of the I/O interrupt, relative to the
total interrupt system, and the processor enters an interrupt service routine.

Simple Interrupt Processing

1. CPU issues read command.


2. I/O module gets data from peripheral whilst CPU does other work.
3. I/O module interrupts CPU.
4. CPU requests data.
5. I/O module transfers data.

Interrupt Processing

1. A device driver initiates an I/O request on behalf of a process.
2. The device driver signals the I/O controller for the proper device, which initiates the
requested I/O.
3. The device signals the I/O controller that it is ready to retrieve input, that the output
is complete, or that an error has been generated.
4. The CPU receives the interrupt signal on the interrupt-request line and transfers control
to the interrupt handler routine.
5. The interrupt handler determines the cause of the interrupt, performs the necessary
processing, and executes a "return from interrupt" instruction.
6. The CPU returns to the execution state prior to the interrupt being signaled.
7. The CPU continues processing until the cycle begins again.

Advantages & Disadvantages of Interrupt-Driven I/O

Advantages:
- fast
- efficient

Disadvantages:
- can be tricky to write if using a low-level language
- can be tough to get the various pieces to work well together
- usually done by the hardware manufacturer / OS maker, e.g. Microsoft

3. Direct Memory Access (DMA)

 Direct Memory Access is a technique for transferring data within main memory and
external device without passing it through the CPU.
 DMA is a way to improve processor activity and I/O transfer rate by taking-over the
job of transferring data from processor, and letting the processor to do other tasks.
 This technique overcomes the drawbacks of the other two I/O techniques, namely the time
consumed issuing commands for data transfer and tying up the processor in data transfer
while data processing is neglected.
 It is more efficient to use the DMA method when a large volume of data has to be
transferred.
 For DMA to be implemented, the processor has to share its system bus with the DMA
module. Therefore, the DMA module must use the bus only when the processor does
not need it, or it must force the processor to suspend operation temporarily.
 The latter technique is more common and is referred to as cycle stealing.
Figure 5 shows where the DMA module's breakpoints fall within an instruction cycle.

Figure 5: DMA and Interrupt Breakpoints during an Instruction Cycle

Basic Operation of DMA


When the processor wishes to read or write a block of data, it issues a command to the DMA
module by sending it the following information:

 a read or write command, sent through the read and write control lines;
 the number of words to be read or written, communicated on the data lines and stored in
the data count register;
 the starting location in memory to read from or write to, communicated on the data lines
and stored in the address register;
 the address of the I/O device involved, communicated on the data lines.

Typical DMA Block Diagram
After this information is sent, the processor continues with other work. The DMA module
then transfers the entire block of data directly to or from memory without going through the
processor. When the transfer is complete, the DMA module sends an interrupt signal to the
processor to inform it that it has finished using the system bus.
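The command-block-then-interrupt flow can be sketched as a simulation; the function signature and the `on_complete` callback standing in for the completion interrupt are invented for the example:

```python
# DMA sketch: the CPU hands over direction, count, and a memory address,
# then goes on with other work; the "DMA module" moves the whole block
# and raises an interrupt (modeled as a callback) when done.

def dma_transfer(memory, device_buf, start, count, write_to_memory, on_complete):
    if write_to_memory:                      # READ: device -> memory
        memory[start:start + count] = device_buf[:count]
    else:                                    # WRITE: memory -> device
        device_buf[:count] = memory[start:start + count]
    on_complete()                            # interrupt: transfer finished

memory = bytearray(8)
device = bytearray(b"DMA!....")
done = []
dma_transfer(memory, device, start=0, count=4,
             write_to_memory=True, on_complete=lambda: done.append(True))
print(memory[:4], done)  # bytearray(b'DMA!') [True]
```

Note that the block is moved in one step rather than byte-by-byte under CPU control, which is what distinguishes DMA from programmed and interrupt-driven I/O.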

Configuration of DMA
DMA mechanism can be configured in a variety of ways, which are:

(i) Single-bus, detached DMA


(ii) Single-bus, integrated DMA-I/O
(iii) I/O bus

(i) Single-bus, detached DMA


All modules share the same system bus. The DMA module acts as a surrogate processor,
using programmed I/O to exchange data between memory and an I/O module through the DMA
module. This configuration is inexpensive but inefficient, because each transfer of a word
consumes two bus cycles.

Single-bus, detached DMA

(ii) Single-bus, integrated DMA


In this configuration, there is a path between the DMA module and one or more I/O
module that does not include the system bus. The DMA logic can be a part of an I/O
module, or a separate module that controls one or more I/O modules. Therefore, the
number of required bus cycles can be cut substantially. The system bus that the DMA
module shares with the processor and memory is used by the DMA module only to
exchange data with memory. The exchange of data between the DMA and I/O modules
takes place off the system bus.

Single-bus, integrated DMA
(iii) I/O bus
In this configuration, the concept is further improved from the previous configuration,
which is single-bus, integrated DMA. I/O modules are connected to the DMA module
using an I/O bus. This can reduce the number of I/O interfaces in the DMA module to
one and provides for an easily expandable configuration. The system bus that the DMA
module shares with the processor and memory is used by the DMA module only to
exchange data with memory. The exchange of data between the DMA and I/O modules
takes place off the system bus.

I/O bus

Summary Flows of Using DMA by Processor

1. The processor issues a command to the DMA module by sending it the necessary
information.
2. The processor does other work.
3. The DMA module acquires control of the system bus and transfers data between memory
and the external device.
4. The DMA module sends a signal to the processor when the transfer is complete, and
control of the system bus is returned to the processor.

Advantages & Disadvantages of DMA

Advantages:
- allows a peripheral device to read from/write to memory without going through the CPU
- allows for faster processing, since the processor can work on something else while the
peripheral populates memory

Disadvantages:
- requires a DMA controller to carry out the operation, which increases the cost of the
system
- cache coherence problems
Principles of I/O Software

I/O software is used for interaction with I/O devices such as mice, keyboards, USB devices,
printers, etc. It is organized in the following layers:

1. User-Level Libraries – provide a simple interface to programs for input/output
functions.
2. Kernel-Level Modules – provide the device drivers that interact with the
device-independent I/O modules and the device controllers.
3. Hardware – the layer containing the hardware controllers and the actual hardware that
the device drivers interact with.

Goals of I/O Software

(i) Uniform naming: For example, file systems are named in a way that the user does not
have to be aware of the underlying hardware name.
(ii) Synchronous versus asynchronous: Most physical I/O is asynchronous; the CPU starts
a transfer and goes off to do something else until an interrupt arrives. Programs are
far easier to write if the I/O operations appear blocking, so it is the operating
system's responsibility to make interrupt-driven operations look synchronous to user
programs.
(iii) Device independence: The most important goal of I/O software is device
independence: it should be possible to write programs that can access any I/O device
without having to specify the device in advance.
(iv) Buffering: Data coming off a device often cannot be stored directly at its final
destination, so buffers are a central part of I/O software for staging and copying
data.
(v) Error handling: Errors are mostly generated by the controller and are best handled
by the controller itself; when a lower level solves the problem, it does not reach
the upper levels.
(vi) Sharable and non-sharable devices: Devices like hard disks can be shared among
multiple processes, while devices like printers cannot. The goal of I/O software is
to handle both types of devices.

Device Drivers

A device driver is a special kind of software program that controls a specific hardware
device attached to a computer.
Device drivers are essential for a computer to work properly: they provide the
all-important means for a computer to interact with hardware, for everything from mouse,
keyboard and display to networks, storage and graphics.

A device driver is generally written by the device's manufacturer and delivered along with
the device on a CD-ROM.

A device driver performs the following jobs:

 Accept requests from the device-independent software above it.
 Interact with the device controller to perform I/O and carry out the required error
handling.
 Make sure that the request is executed successfully.

How a device driver handles a request is as follows:

 Suppose a request comes to read a block N.


 If the driver is idle at the time a request arrives, it starts carrying out the request
immediately.
 Otherwise, if the driver is already busy with some other request, it places the new
request in the queue of pending requests.
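The idle-or-queue behaviour just described can be sketched as follows; the `Driver` class and its method names are illustrative, not a real driver interface:

```python
# Driver request handling sketch, per the steps above: serve a request
# immediately if the driver is idle, otherwise queue it; when one request
# finishes, take the next from the pending queue. Purely illustrative.

from collections import deque

class Driver:
    def __init__(self):
        self.busy = False
        self.pending = deque()
        self.served = []

    def request(self, block):
        if self.busy:
            self.pending.append(block)   # driver busy: queue the new request
        else:
            self._start(block)           # driver idle: carry it out at once

    def _start(self, block):
        self.busy = True
        self.served.append(block)        # stand-in for the actual device I/O

    def complete(self):                  # the device signals completion
        self.busy = False
        if self.pending:
            self._start(self.pending.popleft())

d = Driver()
d.request(7)          # served immediately
d.request(9)          # driver busy: 9 waits in the pending queue
d.complete()          # 7 finishes, 9 is started
print(d.served)       # [7, 9]
```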

Types of Device Drivers

There are several kinds of device drivers, each handling a different kind of I/O. Two main
ones are:

(i) Block device drivers manage devices with physically addressable storage media,
such as disks.
(ii) Character device drivers manage all other devices, which transfer data as a stream
of characters; they include standard character device drivers and STREAMS device
drivers.

Classification of Drivers According to Functionality

There are numerous driver types, differing in their functionality. This subsection briefly
describes three of the most common driver types.

(i) Monolithic Drivers

Monolithic drivers are device drivers that embody all the functionality needed to support a
hardware device. A monolithic driver is accessed by one or more user applications, and
directly drives a hardware device.

Monolithic Drivers

(ii) Layered Drivers

Layered drivers are device drivers that are part of a stack of device drivers that together
process an I/O request. Layered drivers are sometimes also known as filter drivers, and are
supported in all operating systems including all Windows platforms and all Unix platforms.

Layered Drivers

(iii) Miniport Drivers

A miniport driver is an add-on to a class driver that supports miniport drivers. It is used
so that the miniport driver does not have to implement all of the functions required of a
driver for that class.
Figure 2.3 Miniport Drivers

Interrupt

An Interrupt is a signal sent to the CPU by external devices, normally I/O devices. It tells
the CPU to stop its current activities and execute the appropriate part of the operating
system.

There are three types of interrupts:

1. Hardware Interrupts are generated by hardware devices to signal that they need
some attention from the OS. They may have just received some data, or they may
have just completed a task the operating system previously requested, such as
transferring data between the hard drive and memory.
2. Software Interrupts are generated by programs when they want to request a
system call to be performed by the operating system.
3. Traps are generated by the CPU itself to indicate that some error or condition
occurred for which assistance from the operating system is needed.

Interrupts are important because they give the user better control over the computer.
Without interrupts, a process needing attention might have to wait until the CPU was free
to poll for it; with interrupts, the CPU deals with the event immediately.

Interrupt Handlers

An interrupt handler, also known as an interrupt service routine (ISR), is a callback
function, typically in an operating system or a device driver, whose execution is triggered
by the reception of an interrupt.

When the interrupt happens, the interrupt procedure does whatever it has to in order to
handle the interrupt, updates data structures, and wakes up the process that was waiting
for the interrupt to happen.

The interrupt mechanism accepts an address: a number that selects a specific
interrupt-handling routine from a small set. In most architectures, this address is an
offset stored in a table called the interrupt vector table. This vector contains the memory
addresses of specialized interrupt handlers.
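The vector-table lookup can be sketched with a dictionary standing in for the table; the vector numbers and handler bodies are invented for the example:

```python
# Interrupt vector table sketch: a table maps interrupt numbers to
# handler routines; dispatch looks up the entry and calls the handler.
# Vector numbers and handler actions here are illustrative only.

vector_table = {}

def register_handler(irq, handler):
    vector_table[irq] = handler

def dispatch(irq):
    handler = vector_table.get(irq)
    if handler is None:
        raise RuntimeError(f"unhandled interrupt {irq}")
    return handler()

register_handler(1, lambda: "keyboard data read")
register_handler(14, lambda: "page fault serviced")
print(dispatch(1))    # keyboard data read
```

In real hardware the table holds memory addresses of handler routines rather than Python callables, but the lookup-then-jump structure is the same.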

Device-Independent I/O Software

The basic function of the device-independent software is to perform the I/O functions that
are common to all devices and to provide a uniform interface to the user-level software. The
following is a list of functions of device-independent I/O Software.

 Uniform interfacing for device drivers


 Device naming - Mnemonic names mapped to Major and Minor device numbers
 Device protection
 Providing a device-independent block size
 Buffering, because data coming off a device cannot always be stored directly at its
final destination
 Storage allocation on block devices
 Allocating and releasing dedicated devices
 Error Reporting

User-Space I/O Software

These are the libraries which provide a richer and simplified interface to access the
functionality of the kernel, or ultimately to interact with the device drivers. Most
user-level I/O software consists of library procedures, with some exceptions such as the
spooling system, which is a way of dealing with dedicated I/O devices in a
multiprogramming system.

Kernel I/O Subsystem

Kernel I/O Subsystem is responsible to provide many services related to I/O.

The following are some of the services provided.

 Scheduling − Kernel schedules a set of I/O requests to determine a good order in
which to execute them. When an application issues a blocking I/O system call, the
request is placed on the queue for that device. The Kernel I/O scheduler rearranges
the order of the queue to improve the overall system efficiency and the average
response time experienced by the applications.
 Buffering − Kernel I/O Subsystem maintains a memory area known as buffer that
stores data while they are transferred between two devices or between a device with
an application operation.
 Caching − Kernel maintains cache memory which is region of fast memory that
holds copies of data. Access to the cached copy is more efficient than access to the
original.
 Spooling and Device Reservation − A spool is a buffer that holds output for a
device, such as a printer, that cannot accept interleaved data streams. The spooling
system copies the queued spool files to the printer one at a time.
 Error Handling − An operating system that uses protected memory can guard
against many kinds of hardware and application errors.

Notes on interrupt-driven transfers:
 For input, the device interrupts the CPU when new data has arrived and is ready to be
retrieved by the system processor. The actual actions to perform depend on whether
the device uses I/O ports or memory mapping.
 For output, the device delivers an interrupt either when it is ready to accept new data
or to acknowledge a successful data transfer. Memory-mapped and DMA-capable
devices usually generate interrupts to tell the system they are done with the buffer.
 With interrupts, the CPU can work continuously on its tasks without checking the input
devices; when input is available, such as a key press on the keyboard, the CPU is
interrupted from its work to take care of the input data.
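The spooling service can be sketched as follows; jobs are buffered whole and copied to the printer one at a time so their output is never interleaved (names and job data are invented for the example):

```python
# Spooling sketch: each job is buffered in full in the spool queue, and
# the spooler copies completed jobs to the non-sharable printer one at
# a time, so output streams from different jobs never interleave.

from collections import deque

spool_queue = deque()
printer_output = []

def spool(job_name, pages):
    spool_queue.append((job_name, pages))    # buffer the whole job first

def run_spooler():
    while spool_queue:
        name, pages = spool_queue.popleft()  # one complete job at a time
        printer_output.extend(f"{name}:{p}" for p in pages)

spool("jobA", ["p1", "p2"])
spool("jobB", ["p1"])
run_spooler()
print(printer_output)   # ['jobA:p1', 'jobA:p2', 'jobB:p1']
```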

Disk and Disk Operations

Disk hardware is illustrated in the diagrams that follow.
The Physical Parts of a Disk

A disk looks like this, conceptually:

Figure 1-1 A Disk

Magnetic Surface

Any disk has, as a critical part, a surface on which to record data. That surface is usually
magnetic, meaning that it is capable of storing a small amount of magnetism. Perhaps the most
remarkable aspect of disk technology is the number of distinct amounts of magnetism that can be
stored on a single surface of a disk.

Figure 1-2 Disk Surface

Bits

Each single magnetic entity is used by the computer as a binary symbol. Binary means "having
two and only two possible states", such as on or off, true or false, and so on. Each such
entity is called a bit, which is short for binary digit. Binary digits are written as 0
and 1.
Figure 1-3 Bit

Byte

When eight bits are considered together, they are referred to as a byte. A single eight-bit byte is
the amount of computer storage typically used to store a single letter of the alphabet or other
symbol of human communication.

Figure 1-4 Byte

Block, Sector

The surface of a disk is divided into sections. This sectioning is not a physical marking on the
surface, but rather it is just an idea that the disk is so divided. These sections are called sectors or
blocks. The term sector is more common to personal computers.

Figure 1-5 Block

Cluster

With larger disk capacities, it is inefficient for the computer system to deal with millions of
individual blocks one at a time. The operating system's map of the disk's blocks is too big to be
useful unless single bits in the map can represent more than one disk block. Accordingly, disk
blocks are grouped into clusters, which are groups of blocks read and written as a unit. In other
words, a cluster is the minimum allocation quantity for a disk. The cluster size, in terms of
number of blocks per cluster, can be varied by reinitializing the disk.

Figure 1-6 Cluster
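The cluster-as-minimum-allocation idea leads to a simple size calculation; the block and cluster sizes below are illustrative assumptions:

```python
# Cluster arithmetic sketch: a cluster is the minimum allocation unit,
# so a file's on-disk footprint is its length rounded up to a whole
# number of clusters. The sizes used here are illustrative.

def clusters_needed(file_bytes, block_size, blocks_per_cluster):
    cluster_bytes = block_size * blocks_per_cluster
    return -(-file_bytes // cluster_bytes)   # ceiling division

# a 10 000-byte file, 512-byte blocks, 8 blocks (4096 bytes) per cluster
print(clusters_needed(10_000, 512, 8))   # 3 clusters, i.e. 12 288 bytes on disk
```

The gap between the file's true length and the rounded-up allocation is the internal fragmentation cost of choosing a larger cluster size.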

Tracks

The blocks and clusters of storage space are arranged in groups referred to as tracks. A single
track is one strip of disk space beginning at one point on the surface and continuing around in a
circle ending at the same point. The tracks are concentric rings, not a spiral like the grooves on a
phonograph record. Each surface has many tracks.

Figure 1-7 Tracks

Platters

A disk may consist of one or more platters, each of which may be recorded on both sides. The
platter spins like a phonograph record on a turntable.

Figure 1-8 Platters

Cylinder

The tracks at the same radius on each platter, taken together, are referred to as a cylinder. If you
visualized these tracks without any other part of the disk, they would form the shape of a hollow
cylinder.

Figure 1-9 Cylinder

Head

A head is a tiny magnetic device capable of reading or writing magnetic bits on the disk surface.
The platter spins near the head(s), so that a single track of recorded information is continuously
passing under the head, available for reading or writing. The head never touches the surface.
Rather, it floats on a cushion of air so thin that a human hair or even a particle of cigarette smoke
cannot pass between the head and the surface. As foreign particles that small would cause the
disk to fail, such disks are sealed in air-tight containers.

Figure 1-10 Head

Arms

Disk heads are mounted on arms that hold the heads close to the platter surface at precisely the
right point to read or write data. There may be one arm for each head, but on multiple-platter
disks a single arm may support two heads - one for the platter above the arm and one for the
platter below. Some disks mount all the heads on a group of arms that move in unison.

Figure 1-11 Arms

Spindle

A disk platter is attached to a spindle around which it rotates like a wheel on the axle of a car.
The spindle is at the exact center of the platter. The arm moves the head from the outer edge of
the platter toward the spindle at the center and back out again. Though most disks have only one
spindle, some complex disks are made up of two or more single-spindle disks treated as one
large disk. These are called multi-spindle disks. However, no platter ever has more than one
spindle.

Figure 1-12 Spindle


Figure 1-13 Electronics

Drive

The combination of one or more spindles, arms, heads, platters and electronics into a single
physical device for storing and retrieving data is known as a disk drive.

Figure 1-14 Disk Drive

Cable

The electronics in the disk drive are connected to circuitry in the computer by means of cables,
which are no more than wires with a certain type of connector on each end. Often, the individual
wires are color-coded for clarity.

Figure 1-15 Cable

Controller

The controller, which is attached to the computer, decodes instructions from the computer and
issues instructions to the disk drive to do what the computer has instructed. The controller also
receives data and status information from the disk drive, which it passes on to the computer in a
form the computer can understand. A single controller may service more than one disk drive.

Intelligent Disk Controller Functions

Disk controllers range in complexity from a very simple controller that merely relays instructions
and data, to an intelligent controller that uses its information about the status of the disk to help
the computer process data faster. Two examples of intelligent disk controller functions are
seek ordering and data caching.

Seek Ordering

By keeping track of the exact position of the heads at all times, the controller can determine
which one of multiple requests from the computer can be serviced in the shortest time.

Figure 1-16 Seek Ordering

Data Caching

An intelligent controller may contain local memory, called a cache, which is used to store data
recently retrieved from the disk by the computer. Then, if the computer should happen to request
exactly the same data again, the controller can service the request from the local cache at
memory speed (microseconds) instead of at disk speed (milliseconds).

Disk Arm Scheduling Algorithms


A Simple Model of Disk Performance

The access time to read or write a disk section includes three components:

1. Seek time: the time to position heads over a cylinder (~8 msec on average).

2. Rotational delay: the time to wait for the target sector to rotate underneath the head.
Assuming a speed of 7,200 rotations per minute, or 120 rotations per second, each
rotation takes ~8 msec, and the average rotational delay is ~4 msec.

3. Transfer time: the time to transfer bytes. Assuming a peak bandwidth of 58 Mbytes/sec,
transferring a disk block of 4 Kbytes takes about 0.07 msec.

Thus, the overall time to perform a disk I/O = seek time + rotational delay + transfer time.

The sum of the seek time and the rotational delay is the disk latency, or the time to initiate a
transfer. The transfer rate is the disk bandwidth.

If a disk block is randomly placed on disk, then the disk access time is roughly 12 msec to fetch
4 Kbytes of data, for a bandwidth of about 340 Kbytes/sec.

If a disk block is randomly located on the same disk cylinder as the current disk arm position, the
access time is roughly 4 msec without the seek time, or a bandwidth of 1.4 Mbytes/sec.

If the next sector is on the same track, there is no seek time or rotational delay, and data is
transferred at the full 58 Mbytes/sec. Therefore, the key to using the hard drive effectively is to
minimize the seek time and rotational latency.
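The model above can be expressed as a few lines of arithmetic. The constants (8 ms average seek, 4 ms average rotational delay, 58 Mbytes/sec peak bandwidth) are the example figures from the text:

```python
# Disk access-time model: access = seek + rotational delay + transfer.
SEEK_MS = 8.0          # average seek time
ROT_MS = 4.0           # average rotational delay at 7,200 RPM
BANDWIDTH_MB_S = 58.0  # peak transfer bandwidth

def access_time_ms(kbytes, seek=True, rotate=True):
    transfer_ms = kbytes / 1024 / BANDWIDTH_MB_S * 1000
    return (SEEK_MS if seek else 0.0) + (ROT_MS if rotate else 0.0) + transfer_ms

t = access_time_ms(4)               # random 4 KB block: ~12.07 ms
bw = 4 / 1024 / (t / 1000)          # effective bandwidth, ~0.32 MB/s
print(round(t, 2), round(bw, 3))
```

Note how the effective bandwidth for a random 4 KB read is two orders of magnitude below the peak rate, which is the whole motivation for the scheduling algorithms that follow.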

TYPES OF DISK SCHEDULING ALGORITHMS

1. First Come-First Serve (FCFS)


2. Shortest Seek Time First (SSTF)
3. Elevator (SCAN)
4. Circular SCAN (C-SCAN)
5. LOOK
6. C-LOOK

Given the following queue -- 95, 180, 34, 119, 11, 123, 62, 64 -- with the read-write head initially
at track 50 and the last track on the disk being 199, let us now discuss the different algorithms.

1. First Come-First Serve (FCFS)

All incoming requests are placed at the end of the queue. Whatever number that is next in the
queue will be the next number served. Using this algorithm does not provide the best results. To
determine the number of head movements you would simply find the number of tracks it took to
move from one request to the next. For this case it went from 50 to 95 to 180 and so on. From 50
to 95 it moved 45 tracks. If you tally up the total number of tracks, you will find how many
tracks the head had to travel to finish the entire request queue. In this example, it had a total head
movement of 644 tracks. The disadvantage of this algorithm is the oscillation from track 50 up to
track 180, back down to track 11, then up to 123, and so on. As you will soon see, this is the
worst algorithm one can use.
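The head-movement tally for FCFS can be sketched in a few lines; the queue and start track are the ones from the worked example:

```python
# FCFS disk scheduling: service requests in arrival order and total
# the absolute head movement between consecutive tracks.
def fcfs(start, requests):
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

queue = [95, 180, 34, 119, 11, 123, 62, 64]
print(fcfs(50, queue))  # 644 tracks of head movement
```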

2. Shortest Seek Time First (SSTF)

In this algorithm, the request with the shortest seek distance from the current head position is
serviced next. Starting at 50, the next shortest distance is 62 rather than 34, since the head is
only 12 tracks away from 62 but 16 tracks away from 34. The process continues until all the
requests are serviced. Although this seems better, since the head moved a total of only 236
tracks, it is not optimal. There is a great chance that starvation will take place: if many requests
keep arriving close to the current head position, requests farther away may never be handled,
because their distance will always be greater.
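A minimal SSTF sketch, using the same queue, confirms the 236-track total:

```python
# SSTF disk scheduling: repeatedly service the pending request
# closest to the current head position.
def sstf(start, requests):
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print(sstf(50, [95, 180, 34, 119, 11, 123, 62, 64]))  # 236 tracks
```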

3. Elevator (SCAN)

This approach works like an elevator. The head scans down toward the nearest end and, when it
hits the bottom, scans back up, servicing the requests it did not get on the way down. If a request
arrives just behind the head after its position has been passed, it will not be serviced until the
head sweeps back in that direction. This process moved a total of 230 tracks. Once again, this is
better than the previous algorithm, but it is still not the best.

4. Circular Scan (C-SCAN)

Circular scanning works like the elevator algorithm to some extent. The head begins its scan
toward the nearest end and works its way all the way to that end of the disk. Once it hits the
bottom (or top), it jumps to the opposite end and moves in the same direction again. Keep in
mind that the huge jump doesn't count as head movement. The total head movement for this
algorithm is only 187 tracks, but this is still not the most efficient.

5. C-LOOK

This is an enhanced version of C-SCAN. Here the scan does not go past the last request in the
direction it is moving. It, too, jumps toward the other end, but only as far as the furthest request
there rather than all the way to the end. C-SCAN had a total movement of 187 tracks, but
C-LOOK reduces it to 157 tracks.
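The two circular variants can be sketched for this specific example. The sketch assumes, as in the example, that the head first moves toward the nearer (lower) end, that the uncounted jump goes to the opposite extreme, and that requests exist on both sides of the start track:

```python
# C-SCAN vs C-LOOK head movement for the worked example.
def c_scan(start, requests, max_track=199):
    above = [t for t in requests if t >= start]
    movement = start               # sweep down from start all the way to track 0
    if above:                      # after the uncounted jump to max_track,
        movement += max_track - min(above)  # sweep down to the lowest request >= start
    return movement

def c_look(start, requests):
    below = [t for t in requests if t < start]
    above = [t for t in requests if t >= start]
    # sweep down only to the lowest request, jump (uncounted) to the
    # highest request, then sweep down to the lowest request >= start
    return (start - min(below)) + (max(above) - min(above))

queue = [95, 180, 34, 119, 11, 123, 62, 64]
print(c_scan(50, queue), c_look(50, queue))  # 187 157
```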

From this you were able to see the total head movement fall from 644 tracks (FCFS) to just 157
(C-LOOK). You should now understand why the operating system's choice of disk scheduling
algorithm matters when it is dealing with multiple outstanding requests.

NOTE:
It is important that you draw out the sequence when handling algorithms like this one. One
would have a hard time trying to determine which algorithm is best by just reading the definition.
There is a good chance that without the drawings there could be miscalculations.

RAM DISKS
A RAM drive (also called a RAM disk) is a block of RAM (primary storage or volatile
memory) that a computer's software is treating as if the memory were a disk drive (secondary
storage). It is sometimes referred to as a "virtual RAM drive" or "software RAM drive" to
distinguish it from a "hardware RAM drive" that uses separate hardware containing RAM, which
is a type of solid-state drive.
RAID: RAID is an acronym for Redundant Array of Inexpensive Disks (or Independent
Disks).

The basic idea of RAID was to combine multiple small, inexpensive disk drives into an
array of disk drives, which yields performance exceeding that of a Single Large Expensive
Drive (SLED). Additionally, this array of drives appears to the computer as a single logical
storage unit or drive.

Why use RAID?

 Typically, RAID is used in large file servers and transaction or application servers, where
data accessibility is critical and fault tolerance is required.
 RAID is also being used in desktop systems for CAD, multimedia editing and playback
where higher transfer rates are needed.

In RAID:
• Physical disk drive set viewed as single logical unit
Preferable over few large-capacity disk drives
• Improved I/O performance
• Improved data recovery
Disk failure event
• Introduces redundancy
Helps with hardware failure recovery
• Significant factors in RAID level selection
Cost, speed, system’s applications
Increases hardware costs

Data being transferred in parallel from a Level 0 RAID configuration to a large-capacity disk.
The software in the controller ensures that the strips are stored in correct order.

RAID Summary

RAID 0 (LEVEL 0)
• Uses data striping (not considered true RAID)
No parity, error correction, redundancy, or recovery
• Benefits
Devices appear as one logical unit
Best for large data quantity: non-critical data

RAID Level 0 with four disks in the array.


Strips 1, 2, 3, and 4 make up a stripe. Strips 5, 6, 7, and 8 make up another stripe, and so on.
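The strip-to-disk mapping of RAID 0 comes down to modular arithmetic. In the sketch below, disks and strips are 0-indexed (unlike the 1-indexed strips in the figure):

```python
# RAID 0 striping sketch: logical strip i lives on disk i % n,
# at strip slot i // n within that disk.
def strip_location(logical_strip, n_disks=4):
    return logical_strip % n_disks, logical_strip // n_disks  # (disk, slot)

# Strips 0-3 form the first stripe across the four disks,
# strips 4-7 form the second stripe, and so on.
print([strip_location(i) for i in range(8)])
```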

RAID 1 (LEVEL 1)
• Uses mirroring (considered true RAID)

Mirrored configuration (backup)
Duplicate set of all data (expensive)
Provides redundancy
and improved reliability

RAID Level 1 with three disks in the main array and three corresponding disks in the backup
array, the mirrored array.

RAID 2 (LEVEL 2)
• Uses small stripes (considered true RAID)
• Hamming code: error detection and correction
• Expensive and complex
• Size of strip determines number of array disks

RAID Level 2. Seven disks are needed in the array to store a 4-bit data item, one for each bit and
three for the parity bits.
Each disk stores either a bit or a parity bit based on the Hamming code used for redundancy.

RAID 3 (LEVEL 3)
• It is a Modification of Level 2
Requires one disk for redundancy
One parity bit for each strip

RAID Level 3. A 4-bit data item is stored in the first four disks of the array.
The fifth disk is used to store the parity for the stored data item.

RAID 4 (LEVEL 4)
• Same strip scheme as Levels 0 and 1
Computes parity for each strip
Stores parities in corresponding strip
Has designated parity disk

RAID Level 4. The array contains four disks: the first three are used to store data strips, and the
fourth is used to store the parity of those strips.

RAID 5 (LEVEL 5)
• Modification of Level 4
• Distributes parity strips across disks
Avoids Level 4 bottleneck
• Disadvantage
Complicated to regenerate data from failed device

RAID Level 5 with four disks.
Notice how the parity strips are distributed among the disks.
RAID 5 is the most commonly used category of RAID.
It is the most versatile form of RAID.

Recommended Applications:
(i) File and application servers
(ii) Database servers
(iii) WWW, E-mail, and News servers
(iv) Intranet servers
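The parity idea behind Levels 3 through 5 can be demonstrated with XOR over byte strings. The data strips below are made up for illustration:

```python
# XOR parity as used by RAID levels 3-5: the parity strip is the XOR
# of all data strips, so any single lost strip can be rebuilt by
# XOR-ing the surviving strips with the parity strip.
def xor_parity(strips):
    out = bytes(len(strips[0]))          # start with all-zero bytes
    for s in strips:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

data = [b"AAAA", b"BBBB", b"CCCC"]       # hypothetical 4-byte strips
parity = xor_parity(data)

# Simulate losing strip 1 and rebuilding it from the rest plus parity.
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

This is why a Level 5 array survives one disk failure: every missing strip, data or parity, is recoverable from the others in its stripe.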

RAID 6 (LEVEL 6)
• Provides extra degree of error protection/correction
Two different parity calculations (double parity)
Same as level four/five and independent algorithm
Parities stored on separate disk across array
Stored in corresponding data strip
• Advantage: data restoration even if two disks fail

RAID Level 6.
Notice how parity strips and data check (DC) strips are distributed across the disks

Nested raid levels


• Combines multiple RAID levels (complex)

A raid level 10 system

Other RAID levels such as level 0+1,1+0,6,7,10,30,50, 53 are also available.

Size of the Drives in the array: With few exceptions, all RAID arrays are constrained
by the size of the smallest drive(s) in the array--any drives larger than this will not utilize
their additional capacity. This means that if you have an array with five 9 GB drives and
you add a new 36 GB drive to it, 75% of the new drive's capacity (27 GB) will sit there
wasted.
Interfaces used for Connecting RAID:
For slower data transfer rates, the IDE (Integrated Drive Electronics) technology, otherwise
known as ATA (Advanced Technology Attachment), is used. For higher data transfer rates, SCSI
(Small Computer Systems Interface) is used. SATA stands for Serial ATA.

COMPUTER CLOCKING SYSTEM

A clock refers to a microchip that regulates the timing and speed of all computer functions. In
the chip is a crystal that vibrates at a specific frequency when electricity is applied. The shortest
time any computer is capable of performing is one clock, or one vibration of the clock chip. The
speed of a computer processor is measured in clock speed, for example, 1 MHz is one million
cycles, or vibrations, per second; 2 GHz is two billion cycles per second. The internal clock is
also known as the RTC (real-time clock).

Clock Hardware
Types of clocks:
1. Low-end clocks--They are tied to the 110- or 220-volt power line, and cause an interrupt on
every voltage cycle, at 50 or 60 Hz. These are essentially extinct in modern PCs.

2. Programmable Clocks

 These clocks are built from three components: a crystal oscillator, a counter, and a holding
register.

 When a piece of quartz crystal is properly cut and mounted under tension, it can be made to
generate a periodic signal of very high accuracy, typically in the range of 5 to 200 MHz,
depending on the crystal chosen.
 At least one such circuit is usually found in any computer, providing a synchronizing signal to the
computer's various circuits.
 This signal is fed into the counter to make it count down to zero. When the counter gets to zero, it
causes a CPU interrupt. Computers whose advertised clock rate is higher than 200 MHz normally
use a slower clock and a clock multiplier circuit.

Programmable clocks modes

Programmable clocks typically have several modes of operation.

(a) In one-shot mode, when the clock is started, it copies the value of the holding register
into the counter and then decrements the counter at each pulse from the crystal. When the
counter gets to zero, it causes an interrupt and stops until it is explicitly started again by
the software.
(b) In square-wave mode, after getting to zero and causing the interrupt, the holding register
is automatically copied into the counter, and the whole process is repeated again
indefinitely. These periodic interrupts are called clock ticks.

 The advantage of the programmable clock is that its interrupt frequency can be controlled by
software. If a 1-MHz crystal is used, then the counter is pulsed every microsecond. With 16-
bit registers, interrupts can be programmed to occur at intervals from 1 microsecond to 65.536
milliseconds.
 Programmable clock chips usually contain two or three independently programmable clocks
and have many other options as well (e.g., counting up instead of down, interrupts disabled,
and more).
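The interval arithmetic above can be checked directly: with a 1 MHz crystal each counter pulse is one microsecond, and a 16-bit register gives 2^16 possible counts.

```python
# Interval range of a programmable clock with a 1 MHz crystal and
# a 16-bit holding register, as described in the text.
CRYSTAL_HZ = 1_000_000
tick_us = 1_000_000 / CRYSTAL_HZ          # 1 microsecond per pulse
max_interval_ms = (2 ** 16) * tick_us / 1000
print(tick_us, max_interval_ms)  # 1.0 65.536
```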

Clock Software

All the clock hardware does is generate interrupts at known intervals. Everything else involving
time must be done by the software, the clock driver. The exact duties of the clock driver vary
among operating systems, but usually include most of the following:

1. Maintaining the time of day.


2. Preventing processes from running longer than they are allowed to.
3. Accounting for CPU usage.

4. Handling the alarm system call made by user processes.
5. Providing watchdog timers for parts of the system itself.
6. Doing profiling, monitoring, and statistics gathering.

EXPLORATION OF COMPUTER TERMINALS

Computer Terminal

Terminals

1. Hardware
1. memory mapped - video controllers, character or pixel based
2. serial line - RS-232, UARTs
3. network terminals - X terminals, Sun Ray; servers, clients
2. Software
1. buffering - speed disparities
2. key mapping
3. character processing - control-H, newlines, raw and cooked modes
4. output - interrupt driven; scrolling; special characters (bell, form feed); cursors;
escape sequences

A computer terminal is an electronic or electromechanical hardware device that is used for
entering data into, and displaying data from, a computer or a computing system.

1. Text terminals

A text terminal, or often just terminal (sometimes text console) is a serial computer interface for
text entry and display.

2. Graphical terminals

A graphical terminal can display images as well as text. Graphical terminals are divided into
vector-mode terminals, and raster mode.

(i) A vector-mode - display directly draws lines on the face of a cathode-ray tube under
control of the host computer system. The lines are continuously formed, but since the
speed of electronics is limited, the number of concurrent lines that can be displayed at
one time is limited.
(ii) A raster-mode- modern graphic displays are descended from the picture scanning
techniques used for television, in which the visual elements are a rectangular array of
pixels. Since the raster image is only perceptible to the human eye as a whole for a very

short time, the raster must be refreshed many times per second to give the appearance of
a persistent display.

Modes

Terminals can operate in various modes, relating to when they send input typed by the user on
the keyboard to the receiving system (whatever that may be):

 Character mode (a.k.a. character-at-a-time mode): In this mode, typed input is sent
immediately to the receiving system.
 Line mode (a.k.a. line-at-a-time mode): In this mode, the terminal provides a local line
editing function, and sends an entire input line, after it has been locally edited, when the
user presses a return key.
 Block mode (a.k.a. screen-at-a-time mode): In this mode, the terminal provides a local
full-screen data entry function. It sends the completed form, consisting of all the data
entered on the screen, to the receiving system when the user presses an Enter key.

Terminal hardware

RS-232 terminals are hardware devices that contain a keyboard and a display and
communicate using a serial interface, one bit at a time.

The connector's other pins are used for various control functions, but most of them go unused.

Serial Lines

Serial lines are lines in which characters are sent one bit at a time.

All the modems use this interface.

VIRTUAL DEVICES
A Virtual Device refers to any device that performs the same function as a physical device but
does so using virtual resources.

1. BUFFERING
A buffer is a memory area that stores data being transferred between two devices or between a
device and an application.

Buffering: Buffering is the name given to the technique of transferring data into temporary
storage prior to processing or output, thus enabling the simultaneous operation of devices.
I/O buffering is the process of temporarily storing data that is passing between a processor and a
peripheral. The usual purpose is to smooth out the difference in rates at which the two devices
can handle data.

Uses of I/O Buffering:

 Buffering deals effectively with a speed mismatch between the producer and the
consumer of a data stream.
 For example, a buffer is created in main memory to accumulate the bytes received from a
modem.
 After the buffer fills, the data is transferred from the buffer to disk in a single operation.
 This transfer is not instantaneous, so the modem needs a second buffer in which to store
additional incoming data.
 While the first buffer is being written to disk, the modem fills the second buffer with the
additional incoming data.
 When both buffers have completed their tasks, the modem switches back to the first
buffer while the data from the second buffer is transferred to disk.
 The use of two buffers decouples the producer and the consumer of the data, thus
relaxing the timing requirements between them.
 Buffering also accommodates devices that have different data transfer sizes.

Types of various I/O buffering techniques:

1. Single buffer:
A buffer is provided by the operating system to the system portion of the main memory.

Block oriented device –

 System buffer takes the input.


 After taking the input, the block gets transferred to the user space by the process and then
the process requests for another block.
 Two blocks work simultaneously, when one block of data is processed by the user
process, the next block is being read in.
 The OS can swap the process out, since the I/O operation targets the system buffer
rather than user memory.
 The OS copies the data from the system buffer to the user process.

Stream oriented device –

 Line-at-a-time operation is used for scroll-mode terminals. The user inputs one line at a
time, with a carriage return signaling the end of a line.
 Byte-at-a-time operation is used with forms-mode terminals, where each keystroke is
significant.

2. Double buffer:

(i) Block oriented –

 There are two buffers in the system.


 One buffer is used by the driver or controller to store data while waiting for it to be taken
by higher level of the hierarchy.
 Other buffer is used to store data from the lower level module.
 Double buffering is also known as buffer swapping.
 A major disadvantage of double buffering is that the complexity of the process get
increased.
 If the process performs rapid bursts of I/O, then using double buffering may be deficient.

(ii) Stream oriented –

 Line-at-a-time I/O: the user process need not be suspended for input or output, unless
the process runs ahead of the double buffer.
 Byte-at-a-time operation: the double buffer offers no advantage over a single buffer of
twice the length.

3. Circular buffer:

 When more than two buffers are used, the collection of buffers is itself referred to as a
circular buffer.
 In this, the data do not directly pass from the producer to the consumer because the data
would change due to overwriting of buffers before they had been consumed.
 The producer can only fill up to buffer i-1 while data in buffer i is waiting to be
consumed.

2. SPOOLING
Spooling's name comes from the acronym for Simultaneous Peripheral Operation On-Line
(SPOOL).
• A spool is a buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams
• Spooling overlaps input of one job with the computation of other jobs
• The spooler may be reading the input of one job while printing the output of a different
job, as shown below

Figure: Spooling -- the disk spool sits between the online card reader, the CPU, and the line
printer, overlapping the I/O of one job with the computation of another.

Spooling technique
• A high-speed device like a disk is interposed between a running program and a low speed
device involved with the program input/output
• Communication between a high-speed device and low speed device is isolated
• High speed device transfers the data to the spool
• Low speed device gets the data from the spool
Advantages
• Performance of the system is increased

• CPU and I/O devices work more efficiently
• Leads naturally to multiprogramming
• Also used for processing data at remote sites

3. CACHING
When a computer caches a file, it stores a local copy that can be accessed at the speed of the local hard
drive.

 Caching involves keeping a copy of data in a faster-access location than where the data is
normally stored.
 Buffering and caching are very similar, except that a buffer may hold the only copy of a
given data item, whereas a cache is just a duplicate copy of some other data stored
elsewhere.
 Buffering and caching go hand-in-hand, and often the same storage space may be used
for both purposes. For example, after a buffer is written to disk, then the copy in memory
can be used as a cached copy, (until that buffer is needed for other purposes.)
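A toy sketch of the idea: a dictionary stands in for the in-memory cache, and a second dictionary for the (hypothetical) slow disk, so a repeat request never touches the disk again.

```python
# Caching sketch: serve repeat reads of the same block from memory.
disk = {7: b"block-seven"}    # hypothetical backing store
cache = {}
reads_from_disk = 0

def read_block(n):
    global reads_from_disk
    if n not in cache:        # miss: go to the slow device
        reads_from_disk += 1
        cache[n] = disk[n]
    return cache[n]           # hit: served at memory speed

read_block(7)
read_block(7)                 # second read is a cache hit
print(reads_from_disk)  # 1
```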

CHAPTER FIVE
FILE MANAGEMENT
File management is one of the basic and important features of an operating system. The
operating system manages all the files of a computer system, whatever their extensions.
A file is a collection of specific information stored in the memory of a computer system.
File management is defined as the process of manipulating files in a computer system; it
includes creating, modifying, and deleting files.

The following are some of the tasks performed by file management of operating system of
any computer system:

1. It helps to create new files in computer system and placing them at the specific locations.
2. It helps in easily and quickly locating these files in computer system.
3. It makes the process of sharing of the files among different users very easy and user
friendly.
4. It helps to store files in separate folders known as directories. These directories help
users to search for files quickly or to manage files according to their types or uses.
5. It helps the user to modify the data of files or to modify the name of the file in the
directories.

Objective of File management System

Here are the main objectives of the file management system:

 It provides I/O support for a variety of storage device types.


 Minimizes the chances of lost or destroyed data
 Provides standardized I/O interface routines for user processes.
 It provides I/O support for multiple users in a multiuser systems environment.

A file system is the component that manages how and where data on a storage disk, typically a
hard disk drive (HDD), is stored, accessed, and managed. It is a logical disk component that
manages a disk's internal operations and is abstracted away from the human user.

Properties of a File System

Here, are important properties of a file system:

 Files are stored on disk or other storage and do not disappear when a user logs off.

 Files have names and are associated with access permission that permits controlled
sharing.
 Files can be arranged into more complex structures to reflect the relationships between
them.

File structure

A file structure must follow a predefined format that the operating system understands. Each
file has an exclusively defined structure, which is based on its type.

Files can be structured in several ways. Three common possibilities are discussed below;

a) Byte sequence- It is an unstructured sequence of bytes. In effect, the OS does not know or
care what is in the file; all it sees are bytes. Any meaning must be imposed by user-level
programs.
b) Record sequence- It is the first step up in structure. In this model, a file is a sequence of
fixed-length records, each with some internal structure. Central to the idea of a file being
a sequence of records is the idea that the read operation returns one record and the write
operations overwrites or appends one record.
c) Record tree- A file consists of a tree of records, not necessarily all the same length, each
containing a key field in a fixed position in the record.

Three types of files structure in OS:

 A text file: It is a series of characters organized into lines.
 An object file: It is a series of bytes organized into blocks.
 A source file: It is a series of functions and processes.

File Attributes

A file has a name and data. It also stores meta-information such as the file creation date and
time, current size, and last modified date. All this information is called the attributes of the
file.

Here, are some important File attributes used in OS:

 Name: It is the only information stored in a human-readable form.
 Identifier: Every file is identified by a unique tag number within a file system, known as
an identifier.
 Location: Points to file location on device.
 Type: This attribute is required for systems that support various types of files.
 Size: Attribute used to display the current file size.

 Protection: This attribute assigns and controls the access rights of reading, writing, and
executing the file.
 Time, date and security: Used for protection, security, and monitoring.
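In Python, many of these attributes can be read with the standard os.stat call; the file name below is hypothetical and is created only for the demonstration.

```python
# Reading file attributes via os.stat: size, timestamps, and the
# permission bits correspond to the Size, Time/date, and Protection
# attributes listed above.
import os
import stat
import time

path = "example.txt"                    # hypothetical file name
with open(path, "w") as f:
    f.write("hello")

info = os.stat(path)
print(info.st_size)                     # current size in bytes: 5
print(stat.filemode(info.st_mode))      # e.g. -rw-r--r-- (Protection)
print(time.ctime(info.st_mtime))        # last modified date
```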

File Type

It refers to the ability of the operating system to differentiate various types of files like text files,
binary, and source files. Operating systems like MS-DOS and UNIX have the following types
of files:

Character Special File

It is a hardware file that reads or writes data character by character, such as a mouse or a
printer.

Ordinary files

 These types of files store user information.
 It may be text, executable programs, or databases.
 It allows the user to perform operations such as add, delete, and modify.

Directory Files

 A directory contains files and other related information about those files. It is basically
a folder used to hold and organize multiple files.

Special Files

 These files are also called device files. They represent physical devices such as printers,
disks, networks, and flash drives.

Functions of File

 Create: create the file, find space on disk, and make an entry in the directory.
 Write: write to the file; requires positioning within the file.
 Read: read from the file; involves positioning within the file.
 Delete: remove the directory entry and regain disk space.
 Reposition: move the read/write position.

Commonly used terms in File systems

Field:

This element stores a single value, which can be of fixed or variable length.

DATABASE:

Collection of related data is called a database. Relationships among elements of data are explicit.

FILES:

A file is a collection of similar records which is treated as a single entity.

RECORD:

A record type is a complex data type that allows the programmer to create a new data type with
the desired column structure. It groups one or more columns to form a new data type; these
columns have their own names and data types.

File Access Methods

File access is a process that determines the way files are accessed and read into memory.
Operating systems generally support a single access method, though some operating systems
support multiple access methods.

Three file access methods are:

 Sequential access
 Direct random access
 Index sequential access

Sequential Access

In this type of file access method, records are accessed in a certain pre-defined sequence. In the
sequential access method, information stored in the file is also processed one by one. Most
compilers access files using this access method.

Random Access

The random access method is also called direct random access. This method allows accessing a
record directly. Each record has its own address, through which it can be directly accessed for
reading and writing.

Indexed Sequential Access

This type of accessing method is based on simple sequential access. In this access method, an
index is built for every file, with a direct pointer to different memory blocks. In this method, the
Index is searched sequentially, and its pointer can access the file directly. Multiple levels of
indexing can be used to offer greater efficiency in access. It also reduces the time needed to
access a single record.
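As an illustration (not tied to any particular operating system), the three access methods can be mimicked over a byte buffer of fixed-length records; the 16-byte record length and the names are arbitrary choices:

```python
import io

RECLEN = 16  # fixed-length records make direct addressing trivial

buf = io.BytesIO()
names = [b"alice", b"bob", b"carol", b"dave"]
for n in names:
    buf.write(n.ljust(RECLEN))        # write records one after another

# Sequential access: process records in stored order, one by one
buf.seek(0)
seq = [buf.read(RECLEN).strip() for _ in names]

# Direct (random) access: record i lives at offset i * RECLEN
buf.seek(2 * RECLEN)
direct = buf.read(RECLEN).strip()     # jumps straight to the third record

# Indexed sequential: a small index maps a key to the record's offset
index = {n: i * RECLEN for i, n in enumerate(names)}
buf.seek(index[b"dave"])
via_index = buf.read(RECLEN).strip()
print(seq, direct, via_index)
```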

Space Allocation

In the Operating system, files are always allocated disk spaces.

Three types of space allocation methods are:

 Linked Allocation
 Indexed Allocation
 Contiguous Allocation

Contiguous Allocation

In this method,

 Every file uses a contiguous address space on disk.
 The OS assigns disk addresses in linear order.
 In the contiguous allocation method, external fragmentation is the biggest issue.

Linked Allocation

In this method,

 Every file includes a list of links.
 The directory contains a link or pointer to the first block of a file.
 With this method, there is no external fragmentation.
 This file allocation method is used for sequential access files.
 This method is not ideal for a direct access file.

Indexed Allocation

In this method,

 The directory comprises the addresses of the index blocks of the specific files.
 An index block is created, holding all the pointers for a specific file.
 All files should have individual index blocks to store the addresses for disk space.
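Linked allocation can be sketched with a toy simulation; the 12-block "disk", the chunk contents, and the -1 end-of-chain marker are all assumptions made for illustration. The directory entry would store only the first block number:

```python
# Toy disk of 12 blocks; data[b] is None while block b is free.
NBLOCKS = 12
link = [-1] * NBLOCKS          # linked allocation: link[b] = next block of the file
data = [None] * NBLOCKS

def allocate_linked(chunks):
    """Place chunks in free blocks, chaining them together with pointers.
    Returns the first block number (what the directory entry would store)."""
    free = [b for b in range(NBLOCKS) if data[b] is None]
    blocks = free[:len(chunks)]
    for i, b in enumerate(blocks):
        data[b] = chunks[i]
        link[b] = blocks[i + 1] if i + 1 < len(blocks) else -1  # -1 = end of file
    return blocks[0]

def read_linked(first):
    """Sequentially follow the chain from the first block to the end marker."""
    out, b = [], first
    while b != -1:
        out.append(data[b])
        b = link[b]
    return out

first = allocate_linked(["A1", "A2", "A3"])
print(read_linked(first))      # → ['A1', 'A2', 'A3']
```

Reading is naturally sequential (each block must be visited to find the next), which is why this scheme suits sequential access files but not direct access.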

File Directories

A directory may contain multiple files and can also have sub-directories inside it. Information
about files is maintained by directories. In Windows, a directory is called a folder.

Single Level Directory

Following is the information which is maintained in a directory:

 Name: The name which is displayed to the user.
 Type: Type of the directory.
 Position: Current next-read/write pointers.
 Location: Location on the device where the file header is stored.
 Size: Number of bytes, blocks, and words in the file.
 Protection: Access control on read/write/execute/delete.
 Usage: Time of creation, access, and modification.

File types- name, extension

File Type (usual extension): Function

Executable (exe, com, bin, or none): ready-to-run machine-language program

Object (obj, o): compiled, machine language, not linked

Source code (c, p, pas, f77, asm, a): source code in various languages

Batch (bat, sh): series of commands to be executed

Text (txt, doc): textual data, documents

Word processor (doc, docx, tex, rtf, etc.): various word-processor formats

Library (lib, h): libraries of routines

Archive (arc, zip, tar): related files grouped into one file, sometimes compressed
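An operating system (or a program) typically recognizes these types by splitting the extension off the file name. A small sketch with Python's os.path.splitext, which splits on the last period (the file names below are made up):

```python
import os.path

# splitext returns (base, extension); the extension starts at the last period
for name in ["report.doc", "prog.c", "backup.tar", "archive.tar.gz"]:
    base, ext = os.path.splitext(name)
    print(f"{name}: base={base!r} extension={ext!r}")
```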

Summary:

 A file is a collection of correlated information which is recorded on secondary or non-volatile storage like magnetic disks, optical disks, and tapes.
 It provides I/O support for a variety of storage device types.
 Files are stored on disk or other storage and do not disappear when a user logs off.
 A file structure needs to be in a predefined format so that an operating system understands it.
 File type refers to the ability of the operating system to differentiate different types of files like text files, binary files, and source files.
 Create: find space on disk and make an entry in the directory.
 The indexed sequential access method is based on simple sequential access.
 In the sequential access method, records are accessed in a certain pre-defined sequence.
 The random-access method is also called direct random access.
 Three types of space allocation methods are:
o Linked Allocation
o Indexed Allocation
o Contiguous Allocation
 Information about files is maintained by directories.
 Information about files is maintained by Directories

File Structure

A File Structure should be according to a required format that the operating system can
understand.
 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are understandable by the
machine.
 When an operating system defines different file structures, it also contains the code to
support these file structures. Unix and MS-DOS support a minimum number of file structures.

File Type

File type refers to the ability of the operating system to distinguish different types of files, such
as text files, source files, and binary files. Many operating systems support many types of files.
Operating systems like MS-DOS and UNIX have the following types of files −
Ordinary files

 These are the files that contain user information.
 These may have text, databases, or executable programs.
 The user can apply various operations on such files like add, modify, delete or even
remove the entire file.
Directory files

 These files contain list of file names and other information related to these files.
Special files

 These files are also known as device files.
 These files represent physical devices like disks, terminals, printers, networks, tape drives,
etc.
These files are of two types −
 Character special files − data is handled character by character as in case of terminals
or printers.
 Block special files − data is handled in blocks as in the case of disks and tapes.

File Access Mechanisms

File access mechanism refers to the manner in which the records of a file may be accessed.
There are several ways to access files −

 Sequential access
 Direct/Random access
 Indexed sequential access
Sequential access
A sequential access is that in which the records are accessed in some sequence, i.e., the
information in the file is processed in order, one record after the other. This access method is
the most primitive one. Example: Compilers usually access files in this fashion.
Direct/Random access
 Random access file organization provides access to records directly.
 Each record has its own address in the file, with the help of which it can be directly
accessed for reading or writing.
 The records need not be in any sequence within the file, and they need not be in adjacent
locations on the storage medium.
Indexed sequential access

 This mechanism is built on the basis of sequential access.
 An index is created for each file which contains pointers to various blocks.
 The index is searched sequentially and its pointer is used to access the file directly.

Space Allocation

Files are allocated disk space by the operating system. Operating systems deploy the following
three main ways to allocate disk space to files.

 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
Contiguous Allocation

 Each file occupies a contiguous address space on disk.
 Assigned disk addresses are in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of allocation technique.
Linked Allocation

 Each file carries a list of links to disk blocks.
 The directory contains a link / pointer to the first block of a file.
 No external fragmentation.

 Effectively used in sequential access file.
 Inefficient in case of direct access file.
Indexed Allocation

 Provides solutions to the problems of contiguous and linked allocation.
 An index block is created holding all the pointers to a file's blocks.
 Each file has its own index block which stores the addresses of the disk space occupied by
the file.
 The directory contains the addresses of the index blocks of files.
TYPES OF FILE SYSTEM.
The following are the different types of file systems.
FAT File System
FAT stands for “File Allocation Table”. The file allocation table is used by the operating
system to locate files on a disk. A file may be divided into many sections and scattered around
the disk due to fragmentation. FAT keeps track of all the pieces of a file. In DOS systems, the
FAT is stored after the boot sector. The file system has been used since the advent of the PC.
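The way a FAT links together the scattered pieces of a file can be sketched with a toy table; the cluster numbers and contents below are invented for illustration, with None marking end-of-file. A directory entry would store only the starting cluster:

```python
# A toy FAT: fat[c] gives the next cluster of a file, or None at end-of-file.
fat = {2: 5, 5: 9, 9: 3, 3: None}    # the file occupies clusters 2 -> 5 -> 9 -> 3
clusters = {2: b"Th", 5: b"e ", 9: b"OS", 3: b"!!"}

def read_file(start_cluster):
    """Reassemble a (possibly fragmented) file by walking the FAT chain."""
    parts, c = [], start_cluster
    while c is not None:
        parts.append(clusters[c])
        c = fat[c]                   # follow the pointer recorded in the table
    return b"".join(parts)

print(read_file(2))                  # → b'The OS!!'
```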
Features of FAT File System
Some important features of the FAT File System are as follows.
Naming Convention
 The FAT file system used by MS-DOS limits file names to 8 characters.
 The FAT file system used by Windows 2000 supports long file names. The full path of a file, including
the file name, can be up to 255 characters long.
 File names cannot contain certain reserved characters, such as " / \ [ ] : ; | = , ^ * ?
 File names should begin with alphanumeric characters.
 File names can contain spaces and multiple periods. The characters after the last period are
treated as the file extension.
Security
FAT does not support local file and folder security. A user logged on to a computer locally has full
access to the files and folders in the FAT partitions of the computer.
Quick Access to Files
FAT provides quick access to files. The speed of file access depends on file type, file size,
partition size, fragmentation and number of files in a folder.
FAT32 File System
FAT32 is an advanced version of the FAT file system. It can be used on drives from 512 MB to 2 TB
in size. One of the most important features of FAT and FAT32 is that they offer compatibility
with operating systems other than Windows 2000.
Features of FAT32 File System
FAT32 has the following features.

Partition Size
FAT32 increases the number of bits used to address clusters. A cluster is a set of sectors. FAT32
reduces the size of each cluster, supports larger disks (up to 2 TB), and offers better storage efficiency.
Access Speed
FAT32 provides good file access in partition sizes less than 500 MB or greater than 2 GB. It
provides better disk space utilization.
NTFS File System
NTFS stands for “New Technology File System”. Windows 2000 professional fully supports
NTFS. It has the following characteristics.
Features of NTFS File System
The following are some of the main features of NTFS File System.
Naming Conventions
 File names can be up to 255 characters.
 File names cannot contain certain reserved characters, such as \ / : * ? " < > |
 File names are not case sensitive.
Security
NTFS provides file and folder security. Files and folders are safer than FAT. Security is
maintained by assigning NTFS permissions to files and folders. Security is maintained at the
local level and the network level. The permissions can be assigned to individual files and folders.
Each file or folder in an NTFS partition has an Access Control List. It contains the users and
group security identifier (SID) and the privileges granted to them.
Partition Size
The NTFS partition and file sizes are much bigger than FAT partitions and files. The maximum
size of an NTFS partition or file can be 16 exabytes. However, the practical limitation is two
terabytes, and file sizes in practice range from 4 GB to 64 GB.
File Compression
NTFS provides file compression of as much as 50%.
High Reliability
NTFS is highly reliable. It is a recoverable file system that uses transaction logs to update file
and folder logs automatically. The system also has a great amount of fault tolerance: if a
transaction fails due to a power or system failure, the logged transactions are used to recover
the data.
Bad Cluster Mapping
NTFS supports bad-cluster mapping, meaning that the file system detects bad clusters or areas of
the disk with errors. If there is any data in those clusters, it is retrieved and stored in another area.
The bad clusters are marked to prevent data storage in those areas in the future.

Lecturer's Additional Notes

Types of Files

There are three basic types of files:

regular: stores data (text, binary, and executable).
directory: contains information used to access other files.
special: defines a FIFO (first-in, first-out) pipe file or a physical device.

All file types recognized by the system fall into one of these categories. However, the
operating system uses many variations of these basic types.

Regular Files

Regular files are the most common files. Another name for regular files is ordinary
files. Regular files contain data.

Text Files

Text files are regular files that contain information readable by the user. This
information is stored in ASCII. You can display and print these files. The lines of a
text file must not contain NUL characters, and none can exceed {LINE_MAX} bytes
in length, including the new-line character.

The term text file does not prevent the inclusion of control or other nonprintable
characters (other than NUL). Therefore, standard utilities that list text files as inputs
or outputs are either able to process the special characters gracefully or they explicitly
describe their limitations within their individual sections.

Binary Files

Binary files are regular files that contain information readable by the computer.
Binary files may be executable files that instruct the system to accomplish a job.
Commands and programs are stored in executable, binary files. Special compiling
programs translate ASCII text into binary code.

The only difference between text and binary files is that text files have lines of less
than {LINE_MAX} bytes, with no NUL characters, each terminated by a new-line
character.

Directory Files

Directory files contain information the system needs to access all types of files, but
they do not contain the actual file data. As a result, directories occupy less space than
a regular file and give the file system structure flexibility and depth. Each directory
entry represents either a file or a subdirectory. Each entry contains the name of the file
and the file's index node reference number (i-node). The i-node points to the unique
index node assigned to the file. The i-node describes the location of the data
associated with the file. Directories are created and controlled by a separate set of
commands.

See Directory Overview for more information.

Special Files

Special files define devices for the system or temporary files created by processes.
There are three basic types of special files: FIFO (first-in, first-out), block, and
character. FIFO files are also called pipes. Pipes are created by one process to
temporarily allow communication with another process. These files cease to exist
when the first process finishes. Block and character files define devices.

Every file has a set of permissions (called access modes) that determine who can read,
modify, or execute the file.

To learn more about file access modes, see File and Directory Access Modes .
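As a small POSIX-flavored sketch (behavior on non-Unix systems may differ), a file's access modes can be set and read back with os.chmod and os.stat; the file name perm_demo.txt is an arbitrary choice:

```python
import os
import stat

path = "perm_demo.txt"
open(path, "w").close()           # create an empty file
os.chmod(path, 0o640)             # rw- for the owner, r-- for the group, --- for others

mode = os.stat(path).st_mode
print(stat.filemode(mode))        # symbolic form, e.g. '-rw-r-----' on Unix
print(bool(mode & stat.S_IRUSR))  # is the owner-read bit set?
os.remove(path)
```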

File Naming Conventions

The name of each file must be unique within the directory where it is stored. This
ensures that the file also has a unique path name in the file system. File-naming
guidelines are:

 A file name can be up to 255 characters long and can contain letters, numbers,
and underscores.
 The operating system is case-sensitive, which means it distinguishes between
uppercase and lowercase letters in file names. Therefore, FILEA, FiLea,
and filea are three distinct file names, even if they reside in the same
directory.
 File names should be as descriptive and meaningful as possible.
 Directories follow the same naming conventions as files.

 Certain characters have special meaning to the operating system and should be
avoided when naming files. These characters include the following:
/ \ " ' * ; - ? [ ] ( ) ~ ! $ { } < > # @ & |

 A file name is hidden from a normal directory listing if it begins with a . (dot).
When the li or ls command is entered with the -a flag, the hidden files are listed
along with regular files and directories.

File Path Names

The path name for each file and directory in the file system consists of the names of
every directory that precedes it in the tree structure.

Since all paths in a file system originate from the /(root) directory, each file in the file
system has a unique relationship to the root directory known as the absolute path
name. Absolute path names begin with the / (slash) symbol. The absolute path name
of file h within the Example File System is /B/C/h. Notice that there are two files
named g. Because the absolute paths to these files are different, /B/g and /B/C/g, each
file named g has a unique name within the system. Every component of a path name is
a directory except the final component. The final component of a path name can be a
file name.

Note: Path names cannot exceed 1023 characters.
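The uniqueness of absolute path names can be demonstrated with a small tree modeled on the example above; the tree structure here is an assumption, since the original Example File System figure is not reproduced:

```python
import posixpath

# parent directory -> names of its entries (two different files are named "g")
tree = {"/": ["B"], "/B": ["g", "C"], "/B/C": ["g", "h"]}

def absolute_paths(d="/"):
    """Walk the tree from the root, building each entry's absolute path name."""
    for name in tree.get(d, []):
        p = posixpath.join(d, name)
        yield p
        yield from absolute_paths(p)

paths = list(absolute_paths())
print(paths)   # → ['/B', '/B/g', '/B/C', '/B/C/g', '/B/C/h']
```

Both files named g appear, but as /B/g and /B/C/g their absolute path names are distinct.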


Pattern Matching with Wildcards and Metacharacters

Wildcard characters provide a convenient way for specifying multiple file or directory
names with one character. The two wildcard characters are * (asterisk) and ? (question
mark). The metacharacters are [ ] (open and close square brackets), - (dash), and !
(exclamation mark).

* Wildcard

Use the * to match any sequence or string of characters. The * means any characters,
including no characters. For example, if you have the following files in your
directory:
1test 2test afile1 afile2 bfile1 file file1 file10 file2 file3

and you want to refer to only to the files that begin with file, you would use:
file*

The files selected would be: file file1 file10 file2 file3

To refer to only the files that contain the word file, you would use:
*file*

The files selected would be: afile1 afile2 bfile1 file file1 file10
file2 file3

? Wildcard

Use the ? to match any one character. The ? means any single character.

To refer to only the files that start with file and end with a single character, use:
file?

The files selected would be: file1 file2 file3

To refer to only the files that start with file and end with any two characters, use:
file??

The file selected would be: file10

[ ] Shell Metacharacters

Metacharacters offer another type of wildcard notation by enclosing the desired characters
within [ ]. It is like using the ?, but it allows you to choose specific characters to be matched.
The [ ] also allow you to specify a range of values using the - (hyphen). To specify all the
letters in the alphabet, use [[:alpha:]]. To specify all the lowercase letters in the alphabet,
use [[:lower:]].

To refer to only the files that end in 1 or 2, use:


*file[12]

The files selected would be: afile1 afile2 bfile1 file1 file2

To refer only to the files that start with any number, use:
[0123456789]* or [0-9]*

The files selected would be: 1test 2test

To refer only to the files that don't begin with an a, use:


[!a]*

The files selected would be: 1test 2test bfile1 file file1 file10
file2 file3
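These selections can be reproduced with Python's standard fnmatch module, which implements shell-style wildcard matching over the same example directory listing (note that bfile1 also matches *file[12], since * can match the leading b):

```python
import fnmatch

files = ["1test", "2test", "afile1", "afile2", "bfile1",
         "file", "file1", "file10", "file2", "file3"]

def match(pattern):
    """Return the names from the listing selected by a shell wildcard pattern."""
    return fnmatch.filter(files, pattern)

print(match("file*"))      # names beginning with "file"
print(match("*file*"))     # names containing "file"
print(match("file?"))      # "file" plus exactly one character
print(match("*file[12]"))  # names ending in file1 or file2 (bfile1 included)
print(match("[0-9]*"))     # names starting with a digit
print(match("[!a]*"))      # names not starting with "a"
```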

Pattern Matching versus Regular Expressions

Regular expressions allow you to select specific strings from a set of character strings.
The use of regular expressions is generally associated with text processing.

Regular expressions can represent a wide variety of possible strings. While many
regular expressions can be interpreted differently depending on the current locale,
internationalization features provide for contextual invariance across locales.

See the examples in the following comparison between File Matching Patterns and
Regular Expressions:
Pattern Matching Regular Expression
* .*
? .
[!a] [^a]
[abc] [abc]
[[:alpha:]] [[:alpha:]]

See the awk command in the AIX Version 4.3 Commands Reference for the exact
syntax.
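The correspondence in the comparison above can be checked directly: applying a shell pattern with fnmatch and its regular-expression counterpart with re selects the same names. This is a small sketch over a made-up file list, not an exhaustive proof:

```python
import fnmatch
import re

files = ["1test", "afile1", "file", "file1"]

# (shell pattern, equivalent regular expression) pairs from the comparison
pairs = [("file?", "file."), ("[!a]*", "[^a].*"), ("*file*", ".*file.*")]

for shell_pat, regex_pat in pairs:
    by_shell = fnmatch.filter(files, shell_pat)
    by_regex = [f for f in files if re.fullmatch(regex_pat, f)]
    assert by_shell == by_regex      # both notations select the same names
    print(shell_pat, "->", by_shell)
```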

