OS Lecture
1.0 OBJECTIVE
The objective of this lesson is to make the students familiar with the basics of
operating systems. After studying this lesson they will be familiar with:
1. What is an operating system?
2. Important functions performed by an operating system.
3. Different types of operating systems.
1.1 INTRODUCTION
Operating system (OS) is a program or set of programs, which acts as an
interface between a user of the computer & the computer hardware. The main
purpose of an OS is to provide an environment in which we can execute
programs. The main goals of the OS are (i) to make the computer system
convenient to use, & (ii) to use the computer hardware in an efficient way.
Operating System is system software, which may be viewed as a collection of
software consisting of procedures for operating the computer & providing an
environment for execution of programs. It is an interface between user &
computer. So an OS makes everything in the computer work together
smoothly & efficiently.
(Figures: the layered view of a computer system, with application programs above the operating system above the computer hardware; on-line & off-line operation of a card reader & line printer; spooling; the OS as an extended machine built upon the bare machine; & CPU activity multiplexed over time among processes.)
2.0 Objectives
A file is a logical collection of information and a file system is a collection of files. The
objective of this lesson is to discuss the various concepts of file system and make the
students familiar with the different techniques of file allocation and access methods. We also
discuss the ways to handle file protection, which is necessary in an environment where
multiple users have access to files and where it is usually desirable to control by whom and
in what ways files may be accessed.
2.1 Introduction
The file system is the most visible aspect of an operating system. While the memory
manager is responsible for the maintenance of primary memory, the file manager is
responsible for the maintenance of secondary storage (e.g., hard disks). It provides the
mechanism for on-line storage of and access to both data and programs of the operating
system and all the users of the computer system. The file system consists of two distinct
parts: a collection of files, each storing related data, and a directory structure, which
organizes and provides information about all the files in the system. Some file systems have
a third part, partitions, which are used to separate physically or logically large collections of
directories.
Nutt describes the responsibility of the file manager and defines the file, the
fundamental abstraction of secondary storage:
"Each file is a named collection of data stored in a device. The file manager
implements this abstraction and provides directories for organizing files. It also
provides a spectrum of commands to read and write the contents of a file, to set the
file read/write position, to set and use the protection mechanism, to change the
ownership, to list files in a directory, and to remove a file...The file manager provides
a protection mechanism to allow machine users to administer how processes
executing on behalf of different users can access the information in files. File
protection is a fundamental property of files because it allows different people to ..."
(Figure: contiguous allocation; the user directory records each file's name, starting location & length on disk.)
The major problem with contiguous allocation is locating the space for a new file. The
contiguous disk space-allocation problem can be seen to be a particular application of the
general dynamic storage-allocation problem, which is how to satisfy a request of size n
from a list of free holes. First-fit (this strategy allocates the first available hole that is
big enough to accommodate the file; the search may start at the beginning of the set of
holes or where the previous first-fit search ended, & stops as soon as it finds a free hole
that is large enough) and best-fit (this strategy allocates the smallest hole that is big
enough to accommodate the file; the entire list, ordered by size, is searched & the
smallest adequate hole is chosen) are the most common strategies used to select a free hole from the set of
available holes. Simulations have shown that both first-fit and best-fit are more efficient
than worst-fit (this strategy allocates the largest hole; the entire list is searched & the
largest hole is chosen) in terms of both time and storage utilization. Neither first-fit nor
best-fit is clearly best in terms of storage utilization, but first-fit is generally faster.
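The three hole-selection strategies above can be sketched as follows; the hole list & request size here are invented for illustration, not taken from the text.

```python
def first_fit(holes, n):
    """Return the index of the first hole of size >= n, or None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest hole of size >= n, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Return the index of the largest hole if it can hold n, else None."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= n else None

holes = [100, 500, 200, 300, 600]   # free hole sizes, in blocks
print(first_fit(holes, 212))  # 1 (the 500-block hole)
print(best_fit(holes, 212))   # 3 (the 300-block hole)
print(worst_fit(holes, 212))  # 4 (the 600-block hole)
```

Note that best-fit and worst-fit must examine the whole list on every request, which is why first-fit is generally faster.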
These algorithms suffer from the problem of external fragmentation, i.e., the tendency to
develop a large number of small holes. As files are allocated and deleted, the free disk space
is broken into little pieces. External fragmentation exists whenever free space is broken into
chunks. It becomes a problem when the largest contiguous chunk is insufficient for a request;
storage is fragmented into a number of holes, no one of which is large enough to store the
data. Depending on the total amount of disk storage and the average file size, external
fragmentation may be either a minor or a major problem.
Some older microcomputer systems used contiguous allocation on floppy disks. To prevent
loss of significant amounts of disk space to external fragmentation, the user had to run a
repacking (defragmentation) routine periodically.
(Figure: linked allocation; the user directory records the location of each file's first block, & each data block holds a pointer to the next block of the file.)
Space required for the pointers is another disadvantage to linked allocation. If a pointer
requires 4 bytes out of a 512-byte block, then ((4 / 512) * 100 = 0.78) percent of the disk is
being used for pointers, rather than for information. Each file requires slightly more space
than it otherwise would. The usual solution to this problem is to collect blocks into multiples,
called clusters, and to allocate the clusters rather than blocks. For instance, the file system
may define a cluster as 4 blocks, and operate on the disk in only cluster units. Pointers then
use a much smaller percentage of the file's disk space. This method allows the logical-to-
physical block mapping to remain simple, but improves disk throughput (fewer disk head-
seeks) and decreases the space needed for block allocation and free-list management. The
cost of this approach is an increase in internal fragmentation, because more space is wasted if
a cluster is partially full than when a block is partially full. Clusters can be used to improve
the disk access time for many other algorithms, so they are used in most operating systems.
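The pointer-overhead arithmetic above can be checked directly; the 4-byte pointer, 512-byte block & 4-block cluster figures are the ones used in the text.

```python
def pointer_overhead(pointer_bytes, block_bytes, blocks_per_cluster=1):
    """Fraction of disk space consumed by next-block pointers when
    linked allocation operates in units of clusters."""
    cluster_bytes = block_bytes * blocks_per_cluster
    return pointer_bytes / cluster_bytes

# One 4-byte pointer per 512-byte block: about 0.78 % of the disk.
print(round(pointer_overhead(4, 512) * 100, 2))      # 0.78
# Clustering 4 blocks together cuts the overhead to about 0.2 %.
print(round(pointer_overhead(4, 512, 4) * 100, 2))   # 0.2
```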
3.0 Objectives
The objectives of this lesson are to make the students familiar with directory system and
file protection mechanism. After studying this lesson students will become familiar with:
(a) Different types of directory structures.
(b) Different protection structures such as:
- Access Control Matrix
- Access Control Lists
3.1 Introduction
A file system provides the following facilities to its users: (a) Directory structure and file
naming facilities, (b) Protection of files against illegal form of access, (c) Static and dynamic
sharing of files, and (d) Reliable storage of files. A file system helps the user in organizing
the files through the use of directories. A directory may be defined as an object that contains
the names of the file system objects. Entries in the directory determine the names associated
with a file system object. A directory contains information about a group of files. A typical
structure of a directory entry is as under:
File name – Locations Information – Protection Information – Flags
The presence of directories enables the file system to support file sharing and protection.
Sharing is simply a matter of permitting a user to access the files of another user stored in
some other directory. Protection is implemented by permitting the owner of a file to
specify which other users may access his files and in what manner. All these issues are
discussed in detail in this lesson.
3.2 Presentation of Contents
3.2.1 Hierarchical Directory Systems
3.2.1.1 Directory Structure
3.2.1.2 The Logical Structure of a Directory
3.2.1.2.1 Single-level Directory
3.2.1.2.2 Two-level Directory
3.2.1.2.3 Tree-structured Directories
(Figure: a tree-structured directory; the root contains directories x, y & z, directory z contains z1, z2 & z3, z1 contains z11, & the leaves are files.)
When a user refers to a particular file, only his own UFD is searched. Thus,
different users may have files with the same name, as long as the filenames within each
UFD are unique.
To create a file for a user, the operating system searches only that user's UFD to ascertain
whether another file of that name exists. To delete a file, the operating system confines its
search to the local UFD; thus, it cannot accidentally delete another user's file that has the
same name.
The user directories themselves must be created and deleted as necessary. A special system
program is run with the appropriate user name and account information. The program creates
a new user file directory and adds an entry for it to the master file directory. The execution of
this program might be restricted to system administrators.
The two-level directory structure solves the name-collision problem, but it still has problems.
This structure effectively isolates one user from another. This isolation is an advantage when
the users are completely independent, but is a disadvantage when the users co-operate on
some task and need to access one another's files.
If access is to be permitted, one user must have the ability to name a file in another user's
directory.
A two-level directory can be thought of as a tree, or at least an inverted tree. The root of the
tree is the master file directory. Its direct descendants are the UFDs. The descendants of the
user file directories are the files themselves.
Thus, a user name and a file name define a path name. Every file in the system has a path
name. To name a file uniquely, user must know the path name of the file desired.
For example, if user A wishes to access her own test file named test, she can simply refer to
test. To access the test file of user B (with directory-entry name userb), however, she might
have to refer to /userb/test. Every system has its own syntax for naming files in directories
other than the user's own.
There is additional syntax to specify the partition of a file. For instance, in MS-DOS a letter
followed by a colon specifies a partition. Thus, file specification might be "C:\userb\bs.test".
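The two-level lookup described above can be modelled as a master file directory mapping user names to UFDs; the user names & file contents below are hypothetical.

```python
# Master file directory (MFD): user name -> user file directory (UFD).
# Both users may own a file named "test" without collision.
mfd = {
    "usera": {"test": "blocks of usera's test"},
    "userb": {"test": "blocks of userb's test"},
}

def lookup(path, current_user):
    """Resolve a bare name in the caller's own UFD, or a path name of
    the form /user/name through the MFD."""
    if path.startswith("/"):
        user, name = path[1:].split("/", 1)
    else:
        user, name = current_user, path
    ufd = mfd.get(user, {})
    return ufd.get(name)   # None when the file does not exist

print(lookup("test", "usera"))         # usera's own file
print(lookup("/userb/test", "usera"))  # another user's file, via path name
```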
(Figures: two-level directories, in which the master file directory contains user directories such as A, B & C, each holding that user's own files, e.g. A1, A2, B1, B2, C1, C2, C3; and a tree-structured directory, in which an entry such as C4 may itself be a directory containing further entries C41 & C42.)
3.0 OBJECTIVE
The objective of this lesson is to make the students familiar with the various
issues of CPU scheduling. After studying this lesson, they will be familiar with:
1. Process states & transitions.
2. Different types of scheduler
3. Scheduling criteria
4. Scheduling algorithms
3.1 INTRODUCTION
In nearly every computer, the resource that is most often requested is the CPU or
processor. Many computers have only one processor, so this processor must be
shared via time-multiplexing among all the programs that need to execute on the
machine. Here it is important to make a distinction between a program & the
activity of executing a program. The former is a static set of instructions, whereas
the latter is a dynamic activity whose properties change as time progresses; this
activity is known as a process, an executing program. A
process encompasses the current status of the activity, called the process state.
This state includes the current position in the program being executed (the value
of the program counter) as well as the values in the other CPU registers & the
associated memory cells; roughly speaking, it is a snapshot of
the machine at that time. At different times during the execution of a program (at
different times in a process) different snapshots will be observed.
The operating system is responsible for managing all the processes that are
running on a computer & allocates each process a certain amount of time to use
the processor. In addition, the operating system also allocates various other
resources that processes will need, such as computer memory or disks. To keep
track of the state of all the processes, the operating system maintains a table
known as the process table. Inside this table, every process is listed along with
the resources the process is using & the current state of the process.
Processes can be in one of three states: running, ready, or waiting (blocked).
The running state means that the process has all the resources it needs for
execution & it has been given permission by the operating system to use the
processor. Only one process can be in the running state at any given time. The
remaining processes are either in a waiting state (i.e., waiting for some external
event to occur such as user input or a disk access) or a ready state (i.e., waiting
for permission to use the processor). In a real operating system, the waiting &
ready states are implemented as queues, which hold the processes in these
states.
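The three states & the two queues just described can be sketched as follows; the process names & events are invented for illustration.

```python
from collections import deque

# At most one process runs at a time; the rest sit in the ready or
# waiting queues, exactly as described in the text.
ready, waiting = deque(["P2", "P3"]), deque(["P4"])
running = "P1"

def block(current):
    """The running process waits on an external event; the scheduler
    dispatches the process at the head of the ready queue."""
    waiting.append(current)
    return ready.popleft() if ready else None

def wake(process):
    """The awaited event occurred: move the process back to ready."""
    waiting.remove(process)
    ready.append(process)

running = block(running)   # P1 blocks on I/O; P2 is dispatched
wake("P1")                 # P1's I/O completes; P1 rejoins the ready queue
print(running, list(ready), list(waiting))
```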
The assignment of physical processors to processes allows processors to
accomplish work. The problem of determining when processors should be
assigned & to which processes is called processor scheduling or CPU
scheduling.
When more than one process is runnable, the operating system must decide
which one to run first. The part of the operating system concerned with this decision is
called the scheduler, & the algorithm it uses is called the scheduling algorithm. In
operating system literature, the term “scheduling” refers to a set of policies &
mechanisms built into the operating system that govern the order in which the
work to be done by a computer system is completed. A scheduler is an OS
module that selects the next job to be admitted into the system & the next
process to run. The primary objective of scheduling is to optimize system
performance.
(Figure 2: process scheduling queues; batch jobs wait in the batch queue & interactive programs enter the ready queue directly, the medium-term scheduler moves processes between the ready queue & the suspended & swapped-out queue, & the short-term scheduler dispatches processes from the ready queue to the CPU until they exit.)
The medium-term scheduler decides which process's pages to swap to & from the swapping device, & it runs relatively infrequently: typically once a second.
The short-term scheduler, often termed the dispatcher, executes most frequently,
making the fine-grained decisions of suspending
the current process & the new (or continued) execution of another process. Such
opportunities include:
- Clock interrupts, which occur every few milliseconds,
- Expected I/O interrupts, when previous I/O requests are finally satisfied,
- Operating system calls, when the running process asks the operating system to perform a service.
The short-term scheduler allocates the processor among the pool of ready
processes resident in memory. Its main objective is to maximize system
performance in accordance with the chosen set of criteria. Since it is in charge of
ready-to-running state transitions, the short-term scheduler must be invoked for
each process switch to select the next process to be run. In practice, the short-
term scheduler is invoked whenever an event (internal or external) causes the
global state of the system to change. Given that any such change could result in
making the running process suspended or in making one or more suspended
processes ready, the short-term scheduler should be run to determine whether
such significant changes have indeed occurred and, if so, to select the next
process to be run.
Most of the process-management OS services discussed in this lesson require
invocation of the short-term scheduler as part of their processing. For example,
creating a process or resuming a suspended one adds another entry to the ready
list (queue), & the scheduler is invoked to determine whether the new entry
should also become the running process. Suspending a running process,
changing the priority of the running process, & exiting or aborting a process are also
events that may necessitate selection of a new running process. Some operating systems include
an OS call that allows system programmers to cause invocation of the short-term
scheduler explicitly, such as the DECLARE_SIGNIFICANT_EVENT call in the
RSX-11M operating system. Among other things, this service is useful for
invoking the scheduler from user-written event-processing routines, such as
device (I/O) drivers.
As indicated in Figure 2, interactive programs often enter the ready queue
directly after being submitted to the OS, which then creates the corresponding
process for them. Suspended processes are swapped out & later swapped back in to the ready queue for
re-execution.
Medium-term scheduling is thus the decision to add to (grow) the set of processes that are
at least partially in main memory, while short-term scheduling decides which ready process runs
next.
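The short-term scheduler's job of repeatedly picking the next ready process can be sketched with a simple round-robin policy; the job names, burst times & quantum below are invented for illustration.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Run (name, remaining_time) jobs round-robin; return finish order."""
    queue = deque(bursts)          # the ready queue
    finished = []
    while queue:
        name, remaining = queue.popleft()          # dispatch head of queue
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # preempted, re-queued
        else:
            finished.append(name)                  # completes in this quantum
    return finished

print(round_robin([("A", 5), ("B", 2), ("C", 8)], quantum=3))  # ['B', 'A', 'C']
```

Each clock interrupt at the end of a quantum is modelled by popping the head of the queue & re-queuing it if its burst is unfinished.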
5.0 OBJECTIVE
The lesson presents the principles of managing the main memory, one of the most precious resources in a
multiprogramming system. In our sample hierarchy of OS layers, memory management belongs to layer 3. Memory
management is primarily concerned with allocation of physical memory of finite capacity to requesting processes. No
process may be activated before a certain amount of memory can be allocated to it. The objective of this lesson is to
make the students acquainted with the concepts of contiguous memory management.
5.1 INTRODUCTION
Memory is a large array of words or bytes, each having its own unique address. The CPU fetches instructions from memory
according to the value of the program counter. The instructions undergo the instruction execution cycle. To increase both CPU
utilization & the speed of its response to users, computers must keep several processes in memory. Specifically, the memory
management modules are concerned with the following four functions:
5.2.2.6 Sharing
5.2.2.7 Evaluation
5.2.3 VARIABLE PARTITIONED MEMORY ALLOCATION
5.2.3.1 Principles of Operation
5.2.3.2 Compaction
5.2.3.3 Protection
5.2.3.4 Sharing
5.2.3.5 Evaluation
5.2.4 SEGMENTATION
5.2.4.1 Principles of Operation
5.2.4.2 Protection
5.2.4.3 Sharing
5.2.1 SINGLE CONTIGUOUS MEMORY MANAGEMENT
In this scheme, the physical memory is divided into two contiguous areas. One of
them is permanently allocated to the resident portion of the OS. Mostly, the OS
resides in low memory (0 to P as shown in Figure 1). The remaining memory is
allocated to transient or user processes, which are loaded & executed one at a
time, in response to user commands. Each such process is run to completion & then
the next process is brought into memory.
In this scheme, the starting physical address of the program is known at the time of compilation. The compiled program contains
absolute addresses, which do not need to be changed or translated at the time of execution. So there is no issue of
relocation or address translation.
(Figure 1. Single contiguous memory management: the OS area occupies addresses 0 to P, & the transient-process area occupies the remainder of memory up to Max.)
In this scheme, as there is at most one process in memory at any given time, there is rarely an issue of interference
between programs. However, it is desirable to protect the OS code from being tampered with by the executing transient
process.
A common way used in embedded systems to protect the OS code from user
programs is to place the OS in read-only memory. This method is rarely used
because of its inflexibility & inability to patch & update the OS code. In systems
where the OS is in read-write memory, protection from user processes usually
requires some sort of hardware assistance such as the fence registers &
protection bits.
Fence registers are used to draw a boundary between the OS & the transient-process area. Assuming that the resident
portion of the OS is in low memory, the fence register is set to the highest address occupied by OS code. Each memory
address generated by a user process is compared against the fence. Any attempt to read or write the space below the
fence may thus be detected & denied before completion of the related memory reference. Such violations usually trap to
the OS, which in turn may abort the offending program. To serve the purpose of protection, modification of the fence
register must be a privileged operation not executable by user processes. Consequently, this method requires the
hardware ability to distinguish between execution of the OS & of user processes, such as the one provided by user &
supervisor modes of operation.
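The fence-register check described above can be sketched as follows; the fence value & the return strings are illustrative assumptions, not from the text.

```python
FENCE = 0x2000   # highest address occupied by the resident OS (assumed)

def check_access(address, supervisor_mode):
    """Compare each generated address against the fence, as the
    hardware does; a user-mode reference into the OS area traps."""
    if address <= FENCE and not supervisor_mode:
        return "trap"   # violation is denied & trapped to the OS
    return "ok"

print(check_access(0x1000, supervisor_mode=False))  # trap
print(check_access(0x3000, supervisor_mode=False))  # ok
```

Because only the OS may change `FENCE`, the check also depends on the hardware distinguishing user from supervisor mode, as the text notes.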
(Figure: static partitioning of a 1000 KB memory; the OS area occupies 0 to 100K, & partitions P1 to P5 have boundaries at 100K, 300K, 400K, 700K, 800K & 1000K, with processes Pi, Pj & Pk resident in three of the partitions.)
When static partitioning is used, only the status field of each entry, i.e. free or allocated, varies
in the course of system operation. Initially, all the entries are marked “FREE”. As
& when a process is loaded into a partition, the status entry for that partition is
changed to “ALLOCATED”.
Initially, all memory is available for user processes & is regarded as one large hole. On arrival of
a process, a hole large enough for that process is allocated to it. The OS then
reads the program image from disk into the space reserved. After becoming
resident in memory, the newly loaded process makes a transition to the ready state.
Figure 5 – Dynamic relocation
Relocation of memory references at run-time is illustrated by means of the
instruction LDA 500, which is supposed to load the contents of the virtual
address 500 (relative to program beginning) into the accumulator. As indicated,
the target item actually resides at the physical address 1500 in memory. This
address is produced by hardware by adding the contents of the base register to
the virtual address given by the processor at run-time.
As suggested by Figure 4, relocation is performed by hardware & is invisible to
programmers. In effect, all addresses in the process image are prepared as virtual addresses,
counting on the implicit based addressing to complete the relocation process at
run-time. This approach makes a clear distinction between the virtual & the
physical address space.
This is the most commonly used scheme amongst the schemes using fixed
partitions due to its enhanced speed & flexibility. Its advantage is that it supports
swapping easily. Only the base register value needs to be changed before
dispatching.
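The run-time relocation of the LDA 500 example can be sketched as follows; the base value 1000 is implied by the text's mapping of virtual address 500 to physical address 1500.

```python
BASE_REGISTER = 1000   # loaded by the OS when the process is dispatched

def relocate(virtual_address):
    """Hardware adds the base register to every address at run-time."""
    return BASE_REGISTER + virtual_address

# LDA 500 from the text: virtual address 500 resides at physical 1500.
print(relocate(500))   # 1500
```

Swapping support then amounts to reloading `BASE_REGISTER` with the process's new load address before dispatching it, as the text observes.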
5.2.2.5 Protection
Not only must the OS be protected from unauthorized tampering by user processes, but each user process must also be
prevented from accessing the areas of memory allocated to other processes. Otherwise, a single erroneous or malevolent
process may easily corrupt any or all other resident processes. There are two approaches for preventing such
interference & achieving protection. These approaches involve the use of Limit Register & Protection Bits.
The choice between them is usually influenced by the available hardware support. In systems that use base registers
for relocation, a common practice is to use limit registers for protection. The
limit register is set so that no memory reference of the running program may stray
beyond the boundary assigned to the executing program by the OS. In
this way, any attempt to access a memory location outside of the specified area
is detected & aborted by the protection hardware before being allowed to reach
the memory. This violation usually traps to the OS, which may then take a
remedial action, such as to terminate the offending process. The base & limit
values for each process are normally kept in its PCB. Upon each process switch,
the hardware base & limit registers are loaded with the values required for the
new running process.
The other approach is to record the access
rights in the memory itself. The bit-per-word approach described earlier is not
suitable for multiprogramming systems because it can separate only two distinct
address spaces. Adding more bits to designate the identity of each word’s owner
may solve this problem, but this approach is rather costly. A more economical
version of this idea has been implemented by associating a few protection bits with blocks of memory rather than with individual words.
For example, some models of the IBM 360 series use four such bits, called keys,
per each 2 KB block of memory. When a process is loaded in memory, its identity
is recorded in the protection bits of the occupied blocks. The validity of memory
references is then established by comparing the running process's
identity to the contents of the protection bits of the memory block being accessed. If
no match is found, the access is illegal & hardware traps to the OS for
processing. The OS usually reserves for itself a
unique "master" key, say 0, that gives it unrestricted access to all blocks of
memory. The fixed number of keys limits the number of distinct resident processes, a concern for
operating-system designers. For example, with 4-bit keys the maximum number
of distinct process identities is 16. Moreover, associating protection
bits with fixed-sized blocks forces partition sizes to be an integral number of such
blocks.
(Figure: hardware protection with base & limit registers; each CPU-generated address is compared against the limit, & if within bounds ("yes") it is added to the base register to form the physical address, otherwise ("no") the reference is denied & traps to the OS.)
In variable partitioning, the number & sizes of the partitions are not defined at the time of system generation. Starting with the initial state of
the system, partitions are created dynamically to fit the needs of each requesting
process. When a process departs, the memory manager returns the vacated
space to the pool of free memory areas from which partition allocations are made.
One way to solve this problem is to use relative references instead of absolute
references within shared code. For example, the jump in question may read JMP
$+50, where $ denotes the address of the JMP instruction. Since it is relative to
the program counter, the JMP is mapped properly when invoked by either
process, that is, to virtual address 1600 or 100, respectively. At run-time,
however, both references map to the same physical address, 5600, as they
should. This is illustrated in Figure 9.
Code that executes correctly regardless of its load address is often referred to as position-independent code. One of its
properties is that references to portions of the position-independent code itself are always relative, say, to the program
counter or to a base when based addressing is used. Position-independent coding is often used for shared code, such as
memory-resident subroutine libraries. In our example, use of position-independent code solves the problem of self-
referencing.
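The self-referencing example can be checked numerically. Process A's numbers (JMP at virtual address 50, base 5500) come from the text; process B's base & virtual address are assumptions chosen so that, as in the text, both references resolve to physical address 5600.

```python
def jmp_target(jmp_virtual_address, base_register, displacement=50):
    """Physical target of JMP $+displacement: the jump is relative to
    its own address, so relocation needs only the base register."""
    return base_register + jmp_virtual_address + displacement

# Process A: shared code mapped at virtual 50, base register 5500.
print(jmp_target(50, 5500))    # 5600
# Process B (hypothetical mapping): virtual 1550, base register 4000.
print(jmp_target(1550, 4000))  # 5600 - same physical target
```

Because the displacement travels with the instruction, the shared subroutine needs no per-process fix-ups, which is exactly what makes the code position-independent.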
(Figure 9 – Accessing shared code: (a) process A executes the shared subroutine's JMP $+50, & (b) process B calls the same subroutine; because the jump is relative to the program counter, the references from both processes resolve to the same physical address, 5600, despite their different base-register values.)
Position-independent coding is one way to handle the problem of self-referencing
of shared code. The main point of our example, however, is that sharing of code
is more restrictive than sharing of data. In particular, both forms of sharing
require the shared object to be accessible from all address spaces of which it is a
part. In addition, shared code must also be reentrant or executed on a mutually
exclusive basis, & some special provisions, such as position-independent coding,
must be made in order to ensure proper code references to itself. Since ordinary
(non-shared) code does not automatically meet these requirements, some
special language provisions must be in place, or assembly language coding may
be required.
5.2.3.5 Evaluation
Wasted memory: This memory management scheme wastes less memory
than fixed partitions because there is no internal fragmentation as the partition
size can be of any length. By using compaction, external fragmentation can
also be eliminated.
Access Time: Access time is same as of fixed partitions as the same scheme
of address translation using base register is used.
Time Complexity: Time complexity is higher in variable partitions due to the
various data structures & algorithms used; for example, the Partition Description Table
(PDT) is no longer of fixed length.
5.2.4 SEGMENTATION
External fragmentation & its negative impact can be reduced in systems where the average size of a request for
allocation is smaller. The OS cannot reduce the average process size, but a way to reduce the average size of a request for
memory is to divide the address space of a single process into blocks that may be placed into noncontiguous areas of
memory. This can be accomplished by segmentation. Segmentation provides breaking of an address space into several
logical segments, dynamic relocation & sophisticated forms of protection & sharing.
(Figure 10 – Segments: (a) an assembly-language process defining the segments DATA, STACK, CODE & SHARED, the SHARED segment containing the subroutines SSUB1 & SSUB2; (b) the corresponding segment map, numbering the segments 0 (data), 1 (stack, size 500), 2 (code) & 3 (code).)
Segmentation is quite natural for programmers who tend to think of their programs in terms of logically related entities,
such as subroutines & global or local data areas. A segment is essentially a collection of such entities. The segmented
address space of a single process is illustrated in Figure 10(a). In that particular example, four different segments are
defined: DATA, STACK, CODE, & SHARED. Except for SHARED, the name of each segment is chosen to indicate the
type of information that it contains. The STACK segment is assumed to consist of 500 locations reserved for stack. The
SHARED segment consists of two subroutines, SSUB1 & SSUB2, shared with other processes. The definition of the
segments follows the typical assembly-language notation, in which programmers usually have the freedom to define
segments directly in whatever way they feel best suits the needs of the program at hand. As a result, a specific process
may have several different segments of the same generic type, such as code or data. For example, both CODE &
SHARED segments contain executable instructions & thus belong to the generic type "code".
(Figure 11 – Segmented address translation: the CPU issues a virtual address (s, d); s indexes the segment table to fetch the segment's limit & base; if d is less than the limit, d is added to the base to form the physical memory address, otherwise the reference traps.)
Figure 11 also suggests a possible placement of the sample process's segments
into physical memory, & the resulting SDT formed by the OS. The segment number of a virtual
address is used to index the segment descriptor table & to obtain the physical
base address of the related segment. Adding the offset of the desired item to the
base of its enclosing segment then produces the physical address. This process
is illustrated in Figure 11 for the example of the virtual address (3,100). To access
segment 3, the number 3 is used to index the SDT & to obtain the physical base
address, 20000, of the segment SHARED. The size field of the same segment
descriptor is consulted to verify that the offset, 100, does not exceed the size
of its enclosing segment. If so, the base & offset are added to produce the target
physical address. In our example that value is 20100, the first instruction word of
the referenced routine.
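The translation of the virtual address (3,100) can be sketched as follows. Entry 3's base of 20000 matches the SHARED segment of the text; the other entries & all segment sizes are assumptions for illustration.

```python
# Segment descriptor table: segment number -> (base, size).
sdt = {0: (1000, 300), 1: (4000, 500), 2: (6000, 2000), 3: (20000, 800)}

def translate(segment, offset):
    """Map a virtual address (segment, offset) to a physical address,
    trapping on a nonexistent segment or an out-of-bounds offset."""
    if segment not in sdt:
        raise IndexError("nonexistent-segment exception")
    base, size = sdt[segment]
    if offset >= size:
        raise IndexError("offset exceeds segment size")
    return base + offset

print(translate(3, 100))   # the text's example: 20000 + 100 = 20100
```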
In general, the size of a segment descriptor table is related to the size of the
virtual address space of a process. For example, Intel's iAPX 286 processor is
capable of supporting up to 16K segments of 64 KB each per process, thus
requiring 16K entries per SDT. Given their potential size, segment descriptor
tables are not kept in registers. Being a collection of logically related items, the
SDTs themselves are often treated as special types of segments. Their
accessing is usually facilitated by means of a dedicated hardware register called
the segment descriptor table base register (SDTBR), which is set to point to the
base of the running process's SDT. Since the size of an SDT may vary from a
few entries to several thousand, another dedicated hardware register, called the
segment descriptor table limit register (SDTLR), is provided to mark the end of
the SDT pointed to by the SDTBR. In this way, an SDT need contain only as
many entries as there are segments actually defined in a given process.
Attempts to access nonexistent segments may be detected & dealt with as
nonexistent-segment exceptions.
From the OS's point of view, segmentation is essentially a multiple-base-limit
version of dynamically partitioned memory. Memory is allocated in the form of
variable partitions; the main difference is that one such partition is allocated to
each individual segment. Bases & limits of segments belonging to a given
process are collected into an SDT are normally kept in the PCB of the owner
process. Upon each process switch, the SDTBR & SDTLR are loaded with the
base & size, respectively, of the SDT of the new running process. In addition to
the process-loading time, SDT entries may also need to be updated whenever a
process is swapped out or relocated for compaction purposes. Swapping out
requires invalidation of all SDT entries that describe the affected segments.
When the process is swapped back in, the base fields of its segment descriptors
(Figure: sharing a segment among three processes; each process's segment descriptor table, SD1, SD2 & SD3, records the base, size & access rights of its segments, with the private data segments DATA1, DATA2 & DATA3 carrying read-write (RW) rights while the shared code segment is mapped execute-only (EO) into all three address spaces.)
A shared segment must maintain its memory residence while being actively used by any of the processes
authorized to reference it. Swapping in this case opens up the possibility that a
participating process may be swapped out while its shared segment remains
resident. When such a process is swapped back in, the construction of its SDT
must take into consideration the fact that the shared segment may already be
resident. In other words, the OS must keep track of shared segments & of the
processes that use them, to maintain the residence of each shared segment while it has users, if
any, & to ensure its proper mapping from all virtual address spaces of which it is
a part.
5.3 Keywords
Contiguous Memory Management: In this approach, each program occupies a
single contiguous block of storage locations.
First-fit: This allocates the first available space that is big enough to
accommodate the process.
6.0 OBJECTIVE
The objective of this lesson is to make the students familiar with the following
concepts: (1) Non-contiguous memory management, (2) Paging, and (3) Virtual
memory.
6.1 INTRODUCTION
In noncontiguous memory management, memory is allocated in such a way that
parts of a single logical object may be placed in noncontiguous areas of physical
memory, whereas the address space of a process always remains contiguous. At
run time contiguous virtual address space is mapped to noncontiguous physical
address space. This type of memory management is done in various ways:
1. Non-Contiguous, real memory management system
¾ Paged memory management system
¾ Segmented memory management system
¾ Combined memory management system
2. Non-Contiguous, virtual memory management system
¾ Virtual memory management system
6.2 Presentation of contents
6.2.1 Paging
6.2.1.1 Principles of Operation
6.2.1.2 Page Allocation
6.2.1.3 Hardware Support for Paging
6.2.1.4 Protection & Sharing
6.2.2 Virtual Memory
6.2.2.1 Principles of Operation
6.2.2.2 Management of Virtual Memory
6.2.2.4 Program Behavior
6.2.2.5 Replacement Policies
Figure 1 - Paging
The virtual-address space of a sample user process that is 14,848 bytes (3A00H)
long is divided into four virtual pages numbered from 0 to 3. A possible
placement of those pages into physical memory is depicted in Figure 1. The
mapping of virtual addresses to physical addresses in paging systems is
performed at the page level. Each virtual address is divided into two parts: the
page number & the offset within that page. Since pages & page frames have
identical sizes, offsets within each are identical & need not be mapped. So each
24-bit virtual address consists of a 12-bit page number (high-order bits) & a 12-bit
offset within the page.
Address translation is performed with the help of the page-map table (PMT),
constructed at process-loading time. As indicated in figure 1, there is one PMT
entry for each virtual page of a process. The value of each entry is the number of
the page frame in the physical memory where the corresponding virtual page is
placed. Since offsets are not mapped, only the page frame number need be
stored in a PMT entry; e.g., the entry for virtual page 0 simply records the number
of the page frame in which that page resides.
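The page-number/offset split and the PMT lookup can be sketched as follows. This is an illustrative Python sketch; the particular page-to-frame placement is invented, not taken from Figure 1:

```python
PAGE_SIZE = 4096                     # a 12-bit offset gives 4 KB pages

pmt = {0: 5, 1: 2, 2: 7, 3: 4}       # invented page -> frame placement

def translate(vaddr):
    page   = vaddr >> 12             # high-order 12 bits: virtual page number
    offset = vaddr & 0xFFF           # low-order 12 bits: not mapped
    return (pmt[page] << 12) | offset

# 0x1A30 lies in page 1 at offset 0xA30; with page 1 placed in frame 2,
# the physical address becomes 0x2A30.
print(hex(translate(0x1A30)))
```

Note that only the page number changes during translation; the offset is carried over unchanged, exactly as the text describes.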
After selecting n free page frames, the OS loads process pages into them &
constructs the page-map table of the process. Thus, there is one MMT per
system, & as many PMTs as there are active processes.
As indicated, the TLB entries contain pairs of virtual page numbers & the
corresponding page frame numbers where the related pages are stored in
physical memory. The page number is necessary to define each particular entry,
because a TLB contains only a subset of page-map table entries. Address
translation begins by presenting the page-number portion of the virtual address
to the TLB. If the desired entry is found in the TLB, the corresponding page frame
number is combined with the offset to produce the physical address.
Alternatively, if the target entry is not in the TLB, the PMT in memory must be
accessed to complete the mapping. This process begins by consulting the
PMTLR to verify that the page number provided in the virtual address is within
the bounds of the related process's address space. If so, the page number is
added to the contents of the PMTBR to obtain the address of the corresponding
PMT entry.
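The two-level lookup just described, TLB first and PMT as the fallback, can be sketched as follows. The names (tlb, pmt, pmtlr) and table contents are invented for illustration; real MMU hardware performs this in parallel with instruction execution:

```python
def tlb_translate(vaddr, tlb, pmt, pmtlr):
    """Sketch of TLB-first address translation with a 12-bit offset."""
    page, offset = vaddr >> 12, vaddr & 0xFFF
    if page in tlb:                       # TLB hit: no memory access needed
        return (tlb[page] << 12) | offset
    if page >= pmtlr:                     # bounds check against the PMT limit register
        raise MemoryError("page number out of bounds")
    frame = pmt[page]                     # slow path: consult the PMT in memory
    tlb[page] = frame                     # cache the translation for next time
    return (frame << 12) | offset

tlb, pmt = {}, [5, 2, 7, 4]
print(hex(tlb_translate(0x1A30, tlb, pmt, pmtlr=4)))   # miss, resolved via the PMT
print(hex(tlb_translate(0x1A30, tlb, pmt, pmtlr=4)))   # hit, served from the TLB
```

The second call returns the same physical address without touching the PMT, which is the whole point of the TLB: most references are hits and avoid the extra memory access.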
6.4 SUMMARY
The memory-management layer of an OS allocates & reclaims portions of main
memory in response to requests from users & from other OS modules, & in
accordance with the resource-management objectives of a particular system.
Processes are created & loaded into memory in response to scheduling
decisions that are affected by, among other things, the amount of memory
available for allocation at a given instant. Memory is normally freed when
resident objects terminate. When it is necessary & cost-effective, the memory
manager may increase the amount of available memory by moving inactive or
low-priority objects to lower levels of the memory hierarchy (swapping).
Thus, the memory manager interacts with the scheduler in selecting the objects
to be placed into or evicted from the main memory.
7.0 Objectives
We know that the processor and memory of the computer should be scheduled such that
the system operates more efficiently. Another very important scheduler is the disk
scheduler. The disk can be considered the one I/O device that is common to every
computer. Most of the processing of a computer system centers on the disk system.
The disk provides the primary on-line storage of information, both programs and data.
All the important programs of the system, such as compilers, assemblers, loaders,
editors, etc., are stored on the disk until loaded into memory. Hence it becomes
all-important to properly schedule the use of the disk.
Disk comes in many sizes and speeds, and information may be stored optically or
magnetically. However, all disks share a number of important features. A disk is a flat
circular object called a platter. Information may be stored on both sides of a platter
(although some multiplatter disk packs do not use the topmost or bottommost
surface). The platter rotates around its own axis. The circular surface of the
platter is coated with a magnetizable material.
[Figure: a disk pack, showing the spindle, the platters and the read/write heads]
The read/write head can move radially over the magnetic surface. For each position of the
head, the recorded information forms a circular track on the disk surface. Within a track
information is written in blocks. The blocks may be of fixed size or variable size,
separated by block gaps. The variable-size block scheme is flexible but difficult to
implement. Blocks can be separately read or written. The disk can access any
information randomly using an address of the form (track no., record no.).
On floppy disks and hard disks, the media spins at a constant rate. Sectors are organized
into a number of concentric circles or tracks. As one moves out from the center of the
disk, the tracks get larger. Some disks store the same number of sectors on each track,
with outer tracks being recorded using lower bit densities. Other disks place more
sectors on outer tracks; on such a disk, more information can be accessed from an outer
track than from an inner one in the same amount of time.
[Figure: head movement over a request queue for tracks 12, 14, 23, 50, 58, 64, 86,
89, 120 and 140]
7.2.2.3 Scan
In this algorithm the read/write head moves back and forth between the
innermost and outermost tracks. As the head gets to each track, it satisfies all
outstanding requests for that track. In this algorithm also, starvation is possible
only if there are repeated requests for the current track.
The scan algorithm is sometimes called the elevator algorithm, as it is similar to
the behavior of elevators as they service requests to move from floor to floor in a
building.
[Figure: head movement over the request queue for tracks 12, 14, 23, 50, 58, 64,
86, 89, 120 and 140]
7.2.2.5 Look
This algorithm is also similar to scan but, unlike scan, the head does not
unnecessarily travel to the innermost and outermost tracks on each circuit. The
head moves in one direction, satisfying the request for the closest track in that
direction, as in scan. When there are no more requests in the direction the head
is traveling, the head reverses direction and repeats.
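The service order under LOOK can be sketched as follows, using the track numbers from the request-queue figures. This is an illustrative Python sketch; the starting head position of 60 is an invented assumption:

```python
def look(requests, head, direction=+1):
    """LOOK: serve requests in the direction of travel, reverse at the last one."""
    up   = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction > 0 else down + up

queue = [12, 14, 23, 50, 58, 64, 86, 89, 120, 140]
print(look(queue, head=60))
# [64, 86, 89, 120, 140, 58, 50, 23, 14, 12]: the head sweeps up, then reverses
# at track 140 (the last pending request) instead of at the outermost track
```

SCAN would produce the same service order here but would carry the head all the way to the edge of the disk before reversing, which is exactly the wasted travel LOOK avoids.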
7.2.2.7 F-Scan
The "F" stands for "freezing" the request queue at a certain time. It is just like N-
step scan but there are two sub queues only and each is of unlimited length.
In the case of a disk with one read/write head, the objective of all the scheduling
algorithms is to minimize the seek time by minimizing the movement of the
read/write head.
7.4 Keywords
¾ Seek time: To access a block from the disk, first of all the system has to
move the read/write head to the required position. The time consumed in
moving the head to access a block from the disk is known as seek time.
¾ Latency time: The time consumed in rotating the disk to move the desired
block under the read/write head is known as latency time.
¾ Transfer time: The time consumed in transferring the data from the disk to the
main memory is known as transfer time.
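The three components above add up to the total access time for a block. A quick sketch with invented timing figures (they are not from the lesson; typical magnetic disks are in this range):

```python
# Illustrative timing figures, invented for this example:
seek_ms     = 9.0     # move the head to the required track
latency_ms  = 4.2     # on average, half a rotation before the block arrives
transfer_ms = 0.1     # stream one block past the head into main memory

total_ms = seek_ms + latency_ms + transfer_ms
print(round(total_ms, 1))   # 13.3 -- seek and latency dominate the transfer itself
```

The imbalance in these numbers is why disk scheduling concentrates on minimizing head movement: the mechanical components of the access time dwarf the actual data transfer.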
7.5 SELF ASSESSMENT QUESTIONS (SAQ)
1. Compare the throughput of scan and C-scan assuming a uniform
distribution of requests.
8.0 Objectives
The objective of this lesson is to get the students acquainted with the concepts of
process synchronization. This lesson will make them familiar with:
(a) Critical section
(b) Mutual exclusion
(c) Classical coordination problems
8.1 Introduction
Since processes frequently need to communicate with other processes therefore,
there is a need for a well-structured communication, without using interrupts,
among processes. Processes use two kinds of synchronization to control their
activities; (a) Control synchronization: it is needed if a process waits to perform
some action only after some other processes have executed some action, (b)
Data access synchronization: It is used to access shared data in a mutually
exclusive manner. The basic technique used to implement this synchronization is
to block a process until an appropriate condition is fulfilled. In this lesson
synchronization in concurrent processes is discussed. Some classical
coordination problems such as the dining philosophers problem, the producer-consumer
problem, etc. are also discussed. These classical problems are abstractions of
the synchronization problems observed in operating systems.
8.2 Presentation of contents
8.2.1 Race Conditions
8.2.2 Critical Section
8.2.3 Mutual Exclusion
8.2.3.1 Mutual Exclusion Conditions
8.2.3.2 Proposals for Achieving Mutual Exclusion
8.2.4 Classical Process Co-Ordination Problems
When one process is executing in its critical section, no other process is to be allowed
to execute in its critical section; the execution of critical sections by the processes is
thus mutually exclusive in time. So a critical section for a data item d is defined as a
section of code that cannot be executed concurrently with other critical sections for d.
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has
a segment of code, called a critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on. The
important feature of the system is that, when one process is executing its critical
section, no other process is to be allowed to execute in its critical section. Thus,
the execution of critical sections by the processes is mutually exclusive in time.
The critical-section problem is to design a protocol that the processes can use to
co-operate. Each process must request permission to enter its critical section.
The section of code implementing this request is the entry section. An exit
section may follow the critical section, and the remaining code is the remainder
section. Each process thus repeatedly executes:
repeat
  entry section
  critical section
  exit section
  remainder section
until FALSE
Properties of critical section
¾ Correctness: At most one process may execute a critical section at any given moment.
¾ Progress: When a critical section is not in use, one of the processes wishing to enter it
will be granted entry to the critical section. If no process is executing in its critical
section and there exist some processes that wish to enter their critical sections, then
only those processes that are not executing in their remainder section can participate
in the decision of which will enter its critical section next. Moreover, this decision
cannot be postponed indefinitely. So if no process is in critical section, one can decide
quickly who enters and only one process can enter the critical section so in practice,
others are put on the queue.
¾ Bounded wait: After a process p has indicated its desire to enter a critical section, the
number of times other processes gain entry to the critical section ahead of p is
bounded by a finite integer. So there must exist a bound on the number of times that
other processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted. The wait is the
time from when a process makes a request to enter its critical section until that
request is granted. In practice, once a process enters its critical section, it does not get
another turn until a waiting process gets a turn (managed as a queue).
¾ Deadlock freedom: The implementation is free of deadlock.
[Figure: strict-alternation code for Process 1 and Process 2, sharing the variable turn]
The shared variable turn is used to indicate which process can enter the critical
section next. Let process p1 wish to enter the Critical Section. If turn=1, p1 can
enter straightway. After completing the Critical Section, it sets turn to 2 so as to
enable process p2 to enter the Critical Section. If p1 finds turn=2 when it wishes
to enter Critical Section, it waits in the while loop until p2 exits from the critical
section and executes the assignment turn=1. Thus processes may encounter a
busy wait before gaining entry to the Critical Section.
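The alternation described above can be sketched with two Python threads standing in for the two processes; the tight loop mirrors the busy wait in the text (the thread count and iteration count are invented for the example):

```python
import threading

turn = 1          # shared variable: whose turn it is to enter the critical section
log = []

def process(pid, other, iterations=3):
    global turn
    for _ in range(iterations):
        while turn != pid:     # busy wait until it is our turn
            pass
        log.append(pid)        # critical section
        turn = other           # hand the turn to the other process

t1 = threading.Thread(target=process, args=(1, 2))
t2 = threading.Thread(target=process, args=(2, 1))
t1.start(); t2.start(); t1.join(); t2.join()
print(log)                     # strict alternation: [1, 2, 1, 2, 1, 2]
```

The output is always strictly alternating, which demonstrates both the guarantee (mutual exclusion) and the weakness (a fast process is forced to wait for a slow one) discussed in the text.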
Taking turns is not a good idea when one of the processes is much slower than
the other. Suppose process 1 finishes its critical section quickly, so both
processes are now in their non-critical section. This situation violates the above-
mentioned condition 3 (i.e., no process outside its critical section should block
other processes): e.g., if it is p1's turn but p1 remains in its non-critical section,
p2 cannot enter the critical section even though it is free.
to mutual exclusion. The disadvantage of the easy way out is that you give up reader
concurrency. The skeletal form of the two kinds of processes is:
Readers:            Writers:
repeat                repeat
  {read}                {write}
forever               forever
¾ Shared data.
¾ A set of atomic operations on that data.
¾ A set of condition variables.
Each monitor has one lock. A process acquires the lock when it begins a monitor
operation, and releases the lock when the operation finishes.
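A monitor-style construct, one lock plus condition variables guarding shared data, can be sketched in Python. A bounded buffer is chosen as the illustration; the class and method names are invented for the example:

```python
import threading

class BoundedBufferMonitor:
    """Monitor sketch: one lock plus condition variables guarding shared data."""
    def __init__(self, size):
        self.buf, self.size = [], size
        self.lock = threading.Lock()
        self.not_full  = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                        # acquire the monitor lock on entry
            while len(self.buf) == self.size:
                self.not_full.wait()           # sleep until a consumer makes room
            self.buf.append(item)
            self.not_empty.notify()            # wake a waiting consumer

    def get(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()          # sleep until a producer adds an item
            item = self.buf.pop(0)
            self.not_full.notify()             # wake a waiting producer
            return item
```

Because every operation runs under the single lock, at most one process is ever active inside the monitor, and the condition variables let processes block until the state they need holds.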
“Crises and deadlocks when they occur have at least this advantage that
they force us to think.” - Jawaharlal Nehru (1889 - 1964)
9.0 Objectives
The objectives of this lesson are to make the students acquainted with the
problem of deadlocks. In this lesson, we characterize the problem of deadlocks
and discuss policies, which an OS can use to ensure their absence. Deadlock
detection, resolution, prevention and avoidance have been discussed in detail in
the present lesson.
After studying this lesson the students will be familiar with following:
(a) Condition for deadlock.
(b) Deadlock prevention
(c) Deadlock avoidance
(d) Deadlock detection and recovery
9.1 Introduction
If a process is in the need of some resource, physical or logical, it requests the
kernel of operating system. The kernel, being the resource manager, allocates
the resources to the processes. If there is a delay in the allocation of the
resource to the process, it results in the idling of the process. A deadlock is a
situation in which some processes in the system face indefinite delays in
resource allocation. In this lesson, we identify the problems causing deadlocks,
and discuss a number of policies used by the operating system to deal with the
problem of deadlocks.
9.2 Presentation of contents
9.2.1 Definition
9.2.2 Preemptable and Nonpreemptable Resources
9.2.3 Necessary and Sufficient Deadlock Conditions
9.2.4 Resource-Allocation Graph
[Figure: a resource-allocation graph with processes P1, P2, P3 and resources R1-R4]
If the graph does contain a cycle, then a deadlock does exist, as the following
resource-allocation graph depicts.
[Figure: a resource-allocation graph containing a cycle, depicting a deadlock]
In the figure, we see four customers, each of whom has been granted a number of
credit units. The banker reserved only 10 units rather than 22 units to service
them. At a certain moment, the situation becomes:
Customer   Used   Max
A          1      6
B          1      5
C          2      4
D          4      7
Available units = 2
Safe state: The key to a state being safe is that there is at least one way for all
users to finish. In our analogy, the state of figure 2 is safe because with the 2
available units the banker can let C finish; C then returns its 4 units, after which
B, A and D can finish in turn.
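The safety test just described can be sketched in a few lines of Python; the customer table is the one above, and the function name is invented for the example:

```python
def is_safe(used, maxneed, available):
    """Banker's safety test: can every customer finish in some order?"""
    used, free = dict(used), available
    while used:
        # find a customer whose remaining need fits in the free units
        runnable = [c for c in used if maxneed[c] - used[c] <= free]
        if not runnable:
            return False              # nobody can finish: the state is unsafe
        c = runnable[0]
        free += used.pop(c)           # c finishes and returns its units
    return True

used    = {"A": 1, "B": 1, "C": 2, "D": 4}
maxneed = {"A": 6, "B": 5, "C": 4, "D": 7}
print(is_safe(used, maxneed, available=2))   # True: C, then B, A and D can finish
```

With only 1 unit available instead of 2, no customer's remaining need fits and the same state becomes unsafe, which is exactly the distinction the banker's algorithm is built on.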
10.0 Objectives
The objective of this lesson is
(A) To give an overview of the important features of UNIX operating system to
the students.
(B) To make the students familiar with some important UNIX commands.
10.1 Introduction
UNIX is written in a high-level language giving it the benefit
of machine independence, portability, understandability, and
modifiability. Multi-tasking (more than one program can be
made to run at the same time) and multi-user (more than one
user can work at the same computer system at the same time)
are the two most important characteristics of UNIX helping it
in gaining widespread acceptance among a large variety of
users. It was the first operating system to bring in the
concept of hierarchical file structure. It uses a uniform
format for files, called the byte stream, making application
programs easier to write. UNIX treats every file as a
stream of bytes so the user can manipulate his file in the
manner he wants. It provides primitives that allow complex
programs to be built from simpler ones. It provides a very
simple user interface, both character-based and graphical.
It hides the machine architecture
from the user. This helps the programmer to write different
programs that can be made to run on different hardware
configurations. It provides a simple, uniform interface to
peripheral devices.
10.2 Presentation of contents
[Figure 10.2: A process in UNIX, with its PSW, files and resources managed by
the UNIX kernel]
The text segment contains the compiled object instructions, the data segment
contains static variables, and the stack segment holds the runtime stack used to
store temporary variables. A set of source files that is compiled and linked into an
executable form is stored in a file with the default name of a.out. If the program
references statically defined data, such as C static variables, a template for the data
segment is maintained in the executable file. The data segment will be created and
initialized to contain values and space for variables when the executable file is
loaded and executed. The stack segment is used to allocate storage for dynamic
elements of the program, such as automatic C variables that are created when they
come into scope and are destroyed when they pass out of scope.
The compiler and linker create the executable file. These utilities do not define a
process; they define only the program text and a template for the data component
that the process will use when it executes the program. When the loader loads a
program into the computer's memory, the system creates appropriate data and stack
segments, called a process.
A process has a unique process identifier (PID), which the UNIX OS kernel uses to
locate the process's descriptor in a table of process descriptors. Whenever one
process references another process in a system call, it provides the PID of the
target process. The UNIX ps command lists each PID associated with the user
executing the command. The PID of each process appears as a field in the
descriptor of each process.
The UNIX command for creating a new process is the fork system call. Whenever a
process calls fork, a child process is created with its own descriptor, including its
own copy of the parent's address space.
[Figure 10.4: A typical UNIX file system tree, with the root directory ("/") at the top]
Figure 10.4 shows a typical UNIX File System. The file system is organized as a
tree with a single root node called the root (written "/ "); every non-leaf node of the
file system structure is a directory, and leaf nodes of the tree are either directories or
regular files or special devices.
¾ The /bin directory contains the executable files for most UNIX commands.
¾ The /etc directory contains additional commands related to system
maintenance and administration. It also contains several files, which store the
relevant information about the users of the system, the terminals and devices
connected to the system.
¾ The /lib directory contains all the library functions provided by UNIX for the
programmers.
¾ The /dev directory stores files that are related to the devices. UNIX has a file
associated with each of the I/O devices.
¾ The /user directory is created for each user to have a private work area where
the user can store his files. This directory can be given any name. Here it is
named as "user".
¾ The /tmp directory is the directory in which temporary files are kept. The files
stored in this directory are deleted when the system is shut down and
restarted.
This idea of creating a new process to execute a computation may seem like overkill, but it
has a very important characteristic. When the original process decides to execute a new
computation, it protects itself from any fatal errors that might arise during that execution. If
it did not use a child process to execute the command, a chain of fatal errors could cause the
initial process to fail, thus crashing the entire system.
The Bourne shell and others accept a command line from the user, parse the command line,
and then invoke the OS to run the specified command with the specified arguments. When a
user passes a command line to the shell, it is interpreted as a request to execute a program in
the specified file - even if the file contains a program that the user wrote. That is, a
programmer can write an ordinary C program, compile it, then have the shell execute it just
as if it were a normal UNIX command.
For example, you could write a C program in a file named main.c, then compile and execute
it with shell commands like
$ cc main.c
$ a.out
The shell finds the cc command (the C compiler) in the /bin directory, then passes it
the string "main.c" when it creates a child process to execute the cc program. The C
compiler, by default, translates the C program that is stored in main.c, then writes
the resulting executable program into a file named a.out in the current directory. In
the second command, the command line is just the name of the file to be executed,
a.out (without any parameters). The shell finds the a.out file in the current directory,
then executes it.
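The shell's cycle of parsing a command line and running it in a child process can be sketched in Python; subprocess.run here stands in for the fork/exec/wait sequence a real shell performs, and the function name is invented for the example:

```python
import shlex, subprocess

def run_command(line):
    """Minimal shell step: parse the command line and run it in a child process."""
    argv = shlex.split(line)          # "cc main.c" -> ["cc", "main.c"]
    if not argv:
        return 0                      # empty line: nothing to do
    # A real shell forks, execs the program found via the search path, then
    # waits; subprocess.run wraps that same fork/exec/wait cycle.
    return subprocess.run(argv).returncode

print(run_command("echo hello"))      # the child prints "hello"; the shell sees status 0
```

Because the command runs in a child process, a crash in the command leaves the "shell" itself intact, which is the protection described above.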
Consider the detailed steps that a shell must take to accomplish its job:
¾ Printing a prompt. There is a default prompt string, sometimes hard coded into
the shell, e.g., the single-character string "%", "#", ">" or other. When the shell is
started, it can look up the name of the machine on which it is running, and
prepend this name to the standard prompt character, for example giving a
At this point, you are ready to enter your first UNIX command. Now, when you are
done working on your UNIX system and decide to leave your terminal - then it is
always a good idea to log off the system. In order to log off the system, type the
following command:
$ exit
login:
The above command will work if you are using a Bourne or a Korn shell. However, if
you are working on C shell, you can give another command to log off.
$ logout
login:
The UNIX system is very particular about not allowing unauthorized users to access
the system. So, when a message like 'Login denied' appears on the screen, it does
not tell you what was wrong with your login.
10.2.8.2 Changing your Password
You can change your password with the 'passwd' command. The procedure of
changing passwords is very simple. In order to change your password, you first
have to log on to the UNIX system. Then issue the 'passwd' command at the UNIX
prompt.
Syntax: passwd [user-name]
Options:
-d Deletes your password
-x days: This sets the maximum number of days that the password will remain
active. After the specified number of days you will be required to give a new
password.
-n days. This sets the minimum number of days the password has to be active,
before it can be changed.
-s: This gives you the status of the user's password.
Only the superuser can use the above options.
Example:
$passwd
$ passwd:
Changing password for shefali
Enter old password:
Enter new password:
Re-type new password:
Mismatch-password unchanged
UNIX also offers a variety of tools to maintain security. One such tool is the 'lock'
command. The lock command locks your keyboard until you enter a valid
password, as shown below:
$lock
Password: Sorry
Password:
10.2.8.3 UNIX Command Structure
There are a few UNIX commands that you can type standalone, for example, ls,
date, pwd, logout and so on. But UNIX commands generally require some
additional options and/or arguments to be supplied in order to extract more
information. Let us find out the basic UNIX command structure. The UNIX
commands follow the format: Command [options] [arguments]
The options/arguments are specified within square brackets if they are optional. The
options are normally specified by a "-" (hyphen) followed by a letter, one letter per
option.
10.2.9 Common UNIX Commands
Some commonly used UNIX commands are discussed below:
Normally, when you create a file, you are the owner of the file and your group
becomes the group id for the file. The system assigns a default set of permissions
for file, as set by the system administrator. The user can also change these
11.0 Objectives
The objective of this lesson is
(A) To give an overview of the important features of MS-DOS operating system.
(B) To make familiar with some important MS-DOS commands.
11.1 Introduction
MS-DOS is a single-user operating system. It is designed to operate on
machines using the Intel line of 8086 microprocessors. These processors include
the 8088, 8086, 80286, 80386, 80486, and the new Pentium. PCs that are called
386s or 486s are named after their processors, the 80386 and 80486. The 80586
was so different from the previous versions that it was given the name Pentium
as a distinction. MS-DOS cannot support a large network of users.
Because PCs and MS-DOS became so popular, many network operating
systems were designed in the mid-1980s to network MS-DOS machines
together; working around the operating system limitations through software. MS-
DOS is the preferred operating system for most of the Intel processor (currently
Pentium) based PCs of the world. MS-DOS does not break out into neat
compartments as easily as some operating systems do. This is partially due to its
simplicity -- no multi-user or multitasking ability. It is also partly due to the way
MS-DOS has evolved over the years.
So in many respects DOS was a primitive OS. It was based on previous systems,
of course, and echoes of UNIX and CP/M can be seen in it. But if you read
Stephenson's article above, you will realize that it had hidden power, as well,
because you could interact more directly with the components of the computer
than you can with more modern operating systems. It is this power that makes it
valuable to know DOS today. The majority of computer users today use graphical
interfaces.
11.2.1 Kernel
MS-DOS uses two hidden files at boot time. These are io.sys and msdos.sys. For
all practical purposes, these files, in conjunction with the firmware BIOS (Basic
Input Output Services) built into every PC, make up the MS-DOS kernel (basic
operating system). They load at the time of startup and allow the command
processor to run. These files are not rebuildable or alterable. Software to run
printers or CD-ROMs (device drivers) can be installed in MS-DOS but the kernel
cannot be changed.
11.2.2 COMMAND.COM
This file starts the command processor. The command processor uses the
commands you enter at the C:> prompt. When you run application programs and
then return to MS-DOS, the system must be able to find COMMAND.COM and
reload it back into memory (RAM). The command processor also supports a
command language known as the DOS Batch language. The Batch language
files have a .BAT extension and are considered executable by the command
processor. The Batch language is not as powerful as some command languages
but does support conditional statements and variables (if time = next_day
then...). In the early years of PCs, the Batch language was used for many tasks.
Now, low cost utilities often provide many times the functions in addition to direct
support. However, batch files are still very common and very useful.
11.2.5 Limitations
MS-DOS is limited in its memory usage. From the beginning, MS-DOS was
designed to allow only the first 640 kilobytes of memory (RAM) to be used for
application programs, even though only 1 megabyte of memory was
addressable. To current MS-DOS users, 640 KB has proven to be inadequate as
well. The latest version (as of this writing), MS-DOS 6.0, has special memory
management tools and techniques built in to allow programs additional memory.
Additionally, an entire counter-culture of memory management tools has been
developed which allows programs to use more memory under previous versions of
MS-DOS. These include Quarterdeck's QEMM386, Qualitas's 386MAX, and
Pharlap for the 486.
You create a batch file by using an ASCII text editor, such as DOS EDIT, or
Windows Notepad. When you have created the batch file, you save it with a file
name, and give it the extension *.bat. Note that you must not use a name that is
the same as any DOS commands or any other program or utility you are likely to
run. If you use a DOS command name, trying to run your batch file will not work
because the DOS command will execute first. If your name matches some other
program or utility, you may never be able to run that program again because your
batch file will run before the program runs. So pick something that is not likely to
match any command name or program file name.
Virtually all internal and external commands can be used in a batch file. The few
exceptions are the commands that are intended only for configuration which are
used in the CONFIG.SYS file. Examples of these include BUFFERS, COUNTRY,
DEVICE, etc.
When you create a batch file, you are beginning to write a program, essentially.
DOS batch files may not have the power of a structured programming language,
but they can be very handy for handling quick tasks. One good habit for any
programmer is to put comments in the program that explain what the program is
doing. To do so place REM at the beginning of a comment line. The OS will then
ignore that line entirely when it executes the program, but anyone who looks at
the "source code" in the batch file can read your comments and understand what
it is doing.
To have a batch file executed automatically every time MS-
DOS starts, create an AUTOEXEC.BAT file in the root directory.
Common commands used in batch file are:
(a) CALL - Call another batch file with parameters.
(b) ECHO - Display command names or messages as commands are executed
The boot sector contains all the information MS-DOS needs to interpret the structure of
the disk. The FAT allocates disk clusters to files; each cluster may contain many sectors,
but this number is fixed when the disk is formatted and must be a power of 2. A file's
clusters are chained together in the FAT. An extra copy of the FAT is kept for integrity
and reliability. The root directory organizes the files on the disk into directories and
subdirectories.
The FORMAT command causes the following information to be placed in the BIOS
parameter block (BPB) in the boot sector:
i. sector size in bytes
ii. sectors on the disk
iii. sectors per track
iv. cluster size in bytes
v. number of FATs
vi. sectors per FAT
vii. number of directory entries
viii. number of heads
ix. hidden sectors
x. reserved sectors
xi. media identification code.
The boot sector is logical sector 0 of every logical volume and is created when the
disk is formatted. It contains OEM identification, the BIOS parameter block, and
bootstrap loader. When the system is booted, a ROM bootstrap program reads in
the first sector of the disk (this contains the disk bootstrap) and transfers control to it.
The bootstrap loader reads the MS-DOS BIOS into memory and transfers control to it.
CHKDSK compares the two FATs to make sure they are identical. MS-DOS
maintains a copy of the FAT in memory to be able to search it quickly. Each cluster
entry in the FAT contains codes to indicate the following:
Clusters are allocated to files one at a time on demand. Because of the continuing addition,
modification, and deletion of files, the disk tends to become fragmented with free clusters
dispersed throughout the disk. Clusters are allocated sequentially, but in-use clusters are
skipped. No attempt is made to reorganize the disk so that files would consist of sequential
clusters, but this can be accomplished via the MS-DOS commands.
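The cluster chaining described above can be sketched in a few lines of Python. This is an illustrative model, not the real on-disk encoding: the FAT is a list in which entry `c` holds the number of the next cluster of the file, and `END` stands in for the end-of-chain code.

```python
# A minimal sketch of how a file's clusters are chained in the FAT.
# fat[c] holds the next cluster in the file, or END to mark the last
# cluster. The END marker and list layout are illustrative only.

END = -1  # stand-in for the end-of-chain code stored in a real FAT

def cluster_chain(fat, start):
    """Follow the chain from a file's first cluster to its last."""
    chain = []
    c = start
    while c != END:
        chain.append(c)
        c = fat[c]
    return chain

# Clusters 2 -> 5 -> 6 belong to one file; clusters 3 -> 4 to another,
# showing how in-use clusters are simply skipped during allocation.
fat = [END, END, 5, 4, END, 6, END]
print(cluster_chain(fat, 2))   # [2, 5, 6]
print(cluster_chain(fat, 3))   # [3, 4]
```

Notice that neither file occupies sequential clusters; the chain in the FAT is what keeps each file together despite fragmentation.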
MS-DOS uses memory buffers for disk input/output. With file handle calls, the
location of the buffer may be specified in the call. With FCB calls, MS-DOS uses a
preset buffer called the disk transfer area (DTA), which is normally located in the
program segment prefix and is 128 bytes long.
Each directory entry contains an attribute byte that may mark the file as:
i. Read-only file (an attempt to open the file for writing or deletion will fail)
ii. Hidden file (excluded from normal searches)
iii. System file (excluded from normal searches)
iv. Volume label (can exist only in the root directory)
v. Subdirectory (excluded from normal searches)
vi. Archive bit (set to "on" whenever a file is modified)
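The attribute byte is a bitmask, with one bit per attribute listed above. The sketch below decodes it using the standard FAT attribute bit values (0x01 read-only through 0x20 archive); the function name is illustrative.

```python
# A sketch of decoding the MS-DOS directory-entry attribute byte.
# The bit values are the standard FAT attribute bits.

ATTRIBUTES = {
    0x01: "read-only",
    0x02: "hidden",
    0x04: "system",
    0x08: "volume label",
    0x10: "subdirectory",
    0x20: "archive",
}

def decode_attributes(byte):
    """Return the names of the attributes whose bits are set."""
    return [name for bit, name in sorted(ATTRIBUTES.items()) if byte & bit]

# 0x21 = read-only + archive: a modified file that may not be written.
print(decode_attributes(0x21))  # ['read-only', 'archive']
print(decode_attributes(0x10))  # ['subdirectory']
```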
MS-DOS function calls are available to search directories for files in a hardware
independent manner. FCB functions require that files be specified by a pointer to an
unopened FCB; these functions do not support the hierarchical file system. With the
file handle function request, it is possible to search for a file by specifying an ASCII
string; it is possible to search within any subdirectory on any drive, regardless of the
current subdirectory, by specifying a pathname.
Files may be accessed with FCB calls or file handle calls. File handle calls are
designed to work with a hierarchical file system. File handle calls are preferable, but
FCB calls are provided for compatibility with previous versions of DOS. File handle
calls support record locking and sharing. Users interested in writing programs that
will be compatible with future versions of MS-DOS should use handle calls rather
than the FCB calls.
New or replacement device drivers may be installed by using the DEVICE command
followed by the driver's file name in a CONFIG.SYS file on the boot disk. Thus, the
input/output system may be reconfigured at the command level. MS-DOS has
character device drivers and block device drivers. CON, AUX, and PRN are
character device drivers. The CONFIG.SYS file may be used to notify the operating
system of hardware device changes such as additional hard disks, a clock chip, a
RAM disk, or additional memory.
A filter is a program that processes input data in some particular way to produce
output data; the SORT, FIND, and MORE commands are examples of filters.
Background programs are dormant until they are activated by a signal from the
keyboard. A hot key, or combination of keys, signals a program to take control of
keyboard input.
MS-DOS begins the user's program segment in the lowest address free memory.
The program segment prefix (PSP) occupies the first 256 bytes of the program
segment area. The PSP points to various memory locations the program requires as
it executes.
MS-DOS creates a memory control block at the start of each memory area it
allocates. This data structure specifies, among other things, the owner of the block
and its size.
MS-DOS may allocate a new memory block to a program, free a memory block, or
change the size of an allocated memory block. If a program tries to allocate a
memory block of a certain size, MS-DOS searches for an appropriate block. If such
a block is found, it is modified to belong to the requesting process. If the block is too
large, MS-DOS parcels it into an allocated block and a new free block. When a block
of memory is released by a program, MS-DOS changes the block to indicate that it is
available. When a program reduces the amount of memory it needs, MS-DOS
creates a new memory control block for the memory being freed. The first memory
block of a program always begins with program segment prefix. Normally when a
program terminates, its memory is released. The program can retain its memory by
issuing function 31, TERMINATE BUT STAY RESIDENT.
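The allocation behaviour described above (search for a free block, split it if it is too large, mark it free on release) can be sketched as a first-fit allocator. The block representation is illustrative; real MS-DOS memory control blocks live in memory as headers, not Python tuples.

```python
# A simplified sketch of MS-DOS memory allocation: first-fit search,
# splitting an oversized block into an allocated part and a new free
# part. Each block is modeled as a (size, free?) tuple.

def allocate(blocks, size):
    """Find a free block of at least `size`; return its index or None."""
    for i, (blk_size, free) in enumerate(blocks):
        if free and blk_size >= size:
            if blk_size > size:
                # Parcel the block into an allocated part and a free part.
                blocks[i] = (size, False)
                blocks.insert(i + 1, (blk_size - size, True))
            else:
                blocks[i] = (blk_size, False)
            return i
    return None  # no block large enough

def release(blocks, i):
    """Mark a block available again, as MS-DOS does on release."""
    size, _ = blocks[i]
    blocks[i] = (size, True)

blocks = [(16, False), (64, True)]
i = allocate(blocks, 24)
print(blocks)   # [(16, False), (24, False), (40, True)]
release(blocks, i)
print(blocks)   # [(16, False), (24, True), (40, True)]
```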
MS-DOS handles hardware interrupts from devices and software interrupts caused
by executing instructions. An interrupt vector table causes control to transfer to the
appropriate interrupt handlers. Users may write their own interrupt handlers. The
Control-C interrupt normally terminates the active process and returns control to the
command interpreter.
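The interrupt vector table is, in essence, a lookup table from interrupt numbers to handler addresses, and installing a user-written handler means overwriting one entry. A conceptual sketch (handler names and return strings are illustrative; interrupt 23h is the DOS Control-C vector):

```python
# A conceptual interrupt vector table: interrupt numbers map to
# handler functions, and users may install their own handlers.

def default_handler(n):
    return f"default handler for interrupt {n}"

vector_table = {n: default_handler for n in range(256)}

def custom_ctrl_c(n):
    # A user-written replacement for the Control-C handler.
    return "terminate active process"

vector_table[0x23] = custom_ctrl_c   # 23h is the Control-C vector

def dispatch(n):
    """Transfer control to the handler installed for interrupt n."""
    return vector_table[n](n)

print(dispatch(0x23))  # terminate active process
```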
Certain types of critical errors may occur that prevent a program from continuing.
For example, if a program tries to open a file on a disk drive without a disk, or a disk
drive whose door is open, a critical error is signaled. MS-DOS contains a critical
error handler, but users may wish to provide their own routines for better
control over errors in specific situations.
11.3 Keywords
Batch file: It is an executable file that contains a group of commands that the user
wants to execute in sequence.
Device drivers: These are programs that control input and output.
Internal commands: These are commands, such as COPY and DIR that can be
handled by the COMMAND.COM program.
11.4 Summary
MS-DOS is a single-user, single-process operating system. Because the device-dependent
code is confined to one layer (the BIOS), porting MS-DOS is theoretically reduced
to writing the BIOS code for the new hardware. Although early versions of MS-DOS
resemble the CP/M operating system, later releases of MS-DOS
have Unix-like features. At the command level, MS-DOS provides a hierarchical file
system, I/O redirection, pipes and filters. User-written commands can be invoked in
the same way as standard system commands, thus giving the appearance of
extending the basic system functionality.
11.5 Suggested Readings
1. The Design of the UNIX Operating System, Bach M.J., PHI, New Delhi, 2000.
2. Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B.,
John Wiley & Sons.
3. Systems Programming & Operating Systems, 2nd Revised Edition,
Dhamdhere D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.
4. Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill
Publishing Company Ltd., New Delhi.
5. Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education
Asia, 2000.
12.0 Objectives
The objectives of this lesson are:
(a) To provide a brief overview of the history of Windows operating system.
(b) To discuss the key features of Windows NT operating system.
12.1 Introduction
Microsoft Windows is the name of several families of software operating systems
by Microsoft. Microsoft first introduced an operating environment named
Windows in November 1985 as an add-on to MS-DOS in response to the
growing interest in graphical user interfaces (GUI). The term Windows
collectively describes several generations of Microsoft (MS) operating system
(OS) products categorized as follows:
12.1.1 16-bit operating environments
The early versions of Windows were often thought of as just graphical user
interfaces, because they ran on top of MS-DOS and used it for file system
services. However even the earliest 16-bit Windows versions have many typical
operating system functions, such as having their own executable file format and
providing their own device drivers for applications. Unlike MS-DOS, Windows
allowed users to execute multiple graphical applications at the same time,
through cooperative multitasking. Finally, Windows implemented an elaborate,
segment-based, software virtual memory scheme which allowed it to run
applications larger than available memory: code segments and resources were
swapped in and thrown away when memory became scarce, and data segments
moved in memory when a given application had left processor control, typically
waiting for user input. 16-bit Windows versions include Windows 1.0, Windows
2.0 and Windows/286.
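Cooperative multitasking, as used by 16-bit Windows, means each application runs until it voluntarily yields control; a task that never yields starves the rest. The sketch below models tasks as Python generators and a round-robin scheduler; names and the log format are illustrative.

```python
# A sketch of cooperative multitasking: each task runs only until it
# voluntarily yields, and the scheduler round-robins among tasks.

log = []

def task(name, steps):
    for i in range(steps):
        log.append(f"{name}{i}")   # do one unit of work
        yield                      # voluntarily give up the processor

def run_cooperatively(tasks):
    """Round-robin: run each task to its next yield, then requeue it."""
    while tasks:
        t = tasks.pop(0)
        try:
            next(t)
            tasks.append(t)        # task yielded; reschedule it
        except StopIteration:
            pass                   # task finished

run_cooperatively([task("A", 2), task("B", 2)])
print(log)  # ['A0', 'B0', 'A1', 'B1']
```

If `task` never executed `yield`, `run_cooperatively` would loop on it forever, which is exactly the weakness of the cooperative scheme.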
Windows NT loads additional dynamically linked libraries (DLLs) whenever they are
needed. Thus the logical view of Windows NT is quite different from the way the
executable actually appears in memory. It is best to use the logical view when
considering the different aspects of Win32.
Environment subsystem designers choose a target API (such as Win16) and then
build a subsystem to implement that API using the supervisor portion of Windows NT.
Microsoft has chosen its own preferred API, the Win32 API, which is also the API
used by native Windows NT applications.
12.2.2.1 Objects
The NT Kernel defines a set of built-in object types. Some kernel object types are
instantiated by the Kernel itself to form other parts of the overall OS execution
image. These objects collectively save and manipulate the Kernel's state. Other
objects are instantiated and used by the Executive, subsystems, and application
code as the foundation of their computational model. That is, Windows NT and all of
its applications are managed at the Kernel level as objects.
Kernel objects are intended to be fast. They run in supervisor mode in a trusted
context, so there is no security and only limited error checking for Kernel objects, in
contrast to normal objects, which incorporate these features. However, Kernel
objects cannot be manipulated directly by user-mode programs, only through
function calls. Kernel objects are characterized as being either control objects or
dispatcher objects.
For example, a thread might be in the NORMAL class at one moment, but then be in
the HIGH class and operating at the BELOW NORMAL relative priority a little later.
The thread's priority class and the class's NORMAL relative priority define its base
priority. If the priority class is not REAL TIME, then the thread's priority will be for
one of the variable-level queues. In this case, Windows NT might adjust the priorities
of threads in the variable levels according to system events.
The thread scheduler is also preemptive. This means that whenever a thread
becomes ready to run, it is placed in a run queue at a level corresponding to its
current priority. If there is another thread in execution at that time and that thread
has a lower priority, then the lower-priority thread is interrupted and the new, higher-
priority thread is assigned the processor. In a single-processor system, this would
mean that a thread could cause itself to be removed from the processor by enabling
a higher-priority thread. In a multiprocessor system, the situation can be subtler.
Suppose that in a two-processor system, one processor is running a thread at level
10 and the other is running a thread at level 4. If the level 10 thread performs some
action that causes a previously blocked thread to suddenly become runnable at level
6, then the level 4 thread will be halted and the new level 6 thread will begin to use
the processor that the level 4 thread was using.
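The two-processor scenario above follows a simple rule: a newly runnable thread preempts the lowest-priority running thread if and only if its own priority is higher. A sketch (the function name and list representation are illustrative):

```python
# A sketch of the multiprocessor preemption rule: when a thread
# becomes runnable, it displaces the lowest-priority running thread
# if its own priority is higher.

def on_thread_ready(running, new_priority):
    """running: priorities of the threads currently on the processors.
    Returns the priority that was preempted, or None."""
    lowest = min(range(len(running)), key=lambda i: running[i])
    if new_priority > running[lowest]:
        preempted = running[lowest]
        running[lowest] = new_priority
        return preempted
    return None

processors = [10, 4]                   # the levels from the example
print(on_thread_ready(processors, 6))  # 4 (the level 4 thread is halted)
print(processors)                      # [10, 6]
```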
12.2.3 Multiprocess Synchronization
Single-processor systems can support synchronization by disabling interrupts.
However, Windows NT is designed to also support multiprocessors, so the Kernel
must provide an alternative mechanism to ensure that a thread executing on one
processor does not violate a critical section of a thread on another processor. The
Kernel employs spinlocks, by which a thread on one processor can wait for a critical
section by actively testing a Kernel lock variable to determine when it can enter the
critical section, if the hardware supports the test-and-set instruction. Spinlocks are
implemented using the hardware. Spinlock synchronization is used only within the
Kernel and Executive. User-mode programs use abstractions that are implemented
by the Executive.
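The spinlock idea can be sketched in Python, with `Lock.acquire(blocking=False)` standing in for the hardware test-and-set instruction: it atomically tests the lock variable and sets it if it was free. This is a conceptual model only; a real Kernel spinlock uses the hardware instruction directly and never enters the kernel to block.

```python
# A conceptual spinlock: actively test the lock variable until the
# critical section can be entered. Lock.acquire(blocking=False)
# models an atomic test-and-set.

import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Spin: actively test until the test-and-set succeeds.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

lock = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()       # protect the read-modify-write below
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

Spinning wastes the processor while waiting, which is why the text notes that spinlocks are used only for short critical sections inside the Kernel and Executive, while user-mode programs use blocking abstractions instead.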
12.2.3.1 The NT Executive
The NT Executive builds on the Kernel to implement the full set of Windows NT
policies and services, including process management, memory management, file
management, and device management. Windows NT uses object-oriented techniques
throughout the Executive.
The Executive Object Manager implements another object model on top of the
Kernel object model. Whereas Kernel objects operate in a trusted environment,
Executive objects are used by other parts of the Executive and user-mode software
and must take extra measures to assure secure and reliable operation.
An Executive object exists in supervisor space, though user threads can reference
it. This is accomplished by having the Object Manager provide a handle for each
Executive object. Whenever a thread needs a new Executive object, it calls an
Object Manager function to create the object (in supervisor space), to create a
handle to the object (in the process's address space), and then to return the handle
to the calling thread.
Sometimes a second thread will want to use an Executive object that has already
been created. When the second thread attempts to create the existing object, the
Object Manager notes that the object already exists, so it creates a second handle
for the second thread to use to reference the existing Executive object. The two
threads share the single object. The Object Manager keeps a reference count of all
handles to an Executive object. When all outstanding handles have been closed, the
Executive object is deallocated. Thus it is important for each thread to close each
handle it opens, preferably as soon as it no longer needs the handle.
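The handle and reference-count scheme just described can be sketched as follows. Class and method names are illustrative, and the "handle" here is simply the object name; a real handle is an opaque value in the process's handle table.

```python
# A sketch of Executive object sharing: one handle per create request,
# a shared object when the name already exists, and deallocation when
# the reference count drops to zero.

class ObjectManager:
    def __init__(self):
        self.objects = {}    # object name -> open-handle count

    def create(self, name):
        """Create the object, or share it if it already exists."""
        self.objects[name] = self.objects.get(name, 0) + 1
        return name          # the handle (opaque in a real system)

    def close(self, handle):
        self.objects[handle] -= 1
        if self.objects[handle] == 0:
            del self.objects[handle]  # last handle closed: deallocate

om = ObjectManager()
h1 = om.create("event0")     # first thread creates the object
h2 = om.create("event0")     # second thread shares the same object
print(om.objects)            # {'event0': 2}
om.close(h1)
om.close(h2)
print(om.objects)            # {}  (object deallocated)
```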
Each time a handle is created to an object, the Object Manager updates the open
handle information and reference count. The object type information defines a
standard set of methods that the object implements, such as open, close, and delete.
Some of these methods are supplied by the Object Manager, and some must be
tailored to the object type.
The object body format is determined by the Executive component that uses the
object. For example, if the Executive object is a file object, the body format and
contents are managed by the File Manager part of the I/O Manager in the Executive.
12.2.3.1.2 Process and Thread Manager
The Executive's Process and Thread Manager serves the same purpose in
Windows NT that a process manager serves in any OS. It is the part of the OS
responsible for the following:
- Creating and destroying processes and threads
- Overseeing resource allocation
- Providing synchronization primitives
- Controlling process and thread state changes
- Keeping track of most of the information that the OS knows about each thread
The NtCreateThread Executive function creates a thread that can execute within the
process. (The Win32 API CreateProcess function calls both NtCreateProcess and
NtCreateThread; the CreateThread function calls NtCreateThread to create
additional threads within a process.) NtCreateThread performs the following work:
- Calls the Kernel to have it create a Kernel thread object.
- Creates and initializes an ETHREAD block.
- Initializes the thread for execution (sets up its stack, provides it with an
executable start address, and so on).
- Places the thread in a scheduling queue.
12.2.3.1.3 Virtual Memory Manager
Windows NT is a paging virtual memory system: it keeps a process's address
space contents in secondary storage and loads portions of the image from
secondary storage into primary storage on a page-by-page basis whenever they are
needed.
When a process is created, it has 4 GB of virtual addresses available to it, though
none of the addresses are actually allocated at that time. When the process needs
space, it first reserves as much of the address space as it needs at that moment;
reserved addresses do not cause any actual space to be allocated; rather, virtual
addresses are reserved for later use. When the process needs to use the virtual
addresses to store information, it commits the address space, meaning that some
system storage space is then allocated to the process to hold information. A commit
operation causes space on the disk (in the process's page file) to be allocated to the
process.
12.2.3.1.4 The I/O Manager
The I/O Manager is responsible for handling all input/output operations to every
device in the system. It creates an abstraction of all device I/O operations so that
the system's clients can perform operations on a common data structure.
Figure 12.5 The I/O Manager: I/O requests flow down through filter drivers,
intermediate drivers, and device drivers to the HAL and the device, all within the
NT Executive.
Drivers are the single component that can be added to the NT Executive to run a
low-level device in supervisor mode. The OS has not been designed to support third
party software, other than drivers, that want to add supervisor mode functionality. In
today's commercial computer marketplace, a consumer can buy a computer from
one vendor and then buy disk drives, graphic adapters, sound boards, and so on,
from other vendors. The OS must be able to accommodate this spectrum of
equipment built by different vendors. Therefore it is mandatory that the OS allow
third parties to add software drivers for each of these hardware components that can
be added to the computer.
The NT I/O Manager defines the framework in which device drivers, intermediate
drivers, file system drivers, and filter drivers are dynamically added to and removed
from the system and are made to work together. This dynamic stream design allows
complex I/O systems to be configured easily.
The I/O Manager directs modules by issuing I/O request packets (IRPs) into a
stream. If the IRP is intended for a particular module, that module responds to the
IRP; otherwise, it passes the IRP to the next module in the stream. Each driver in
the stream has the responsibility of accepting IRPs, either reacting to the IRP if it is
directed at the driver or passing it on to the next module if it is not.
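The pass-it-on behaviour of a driver stream can be sketched as a chain of handlers: each driver either reacts to the I/O request packet or hands it to the next module. Driver names and the IRP fields below are illustrative, not real NT structures.

```python
# A sketch of IRP routing in a driver stream: each driver reacts to
# the request packet if it is directed at that driver, otherwise it
# passes the IRP to the next module in the stream.

def make_driver(name, handles):
    def driver(irp):
        if irp["target"] == handles:
            return f"{name} handled {irp['op']}"
        return None          # not for us; let the next module try
    return driver

def send_irp(stream, irp):
    """Pass the IRP down the stream until some driver accepts it."""
    for driver in stream:
        result = driver(irp)
        if result is not None:
            return result
    return "unclaimed IRP"

stream = [make_driver("filter", "fs"),
          make_driver("filesystem", "file"),
          make_driver("disk", "device")]
print(send_irp(stream, {"target": "device", "op": "read"}))
# disk handled read
```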
All information read from or written to the device is managed as a stream of bytes,
called a virtual file. Every driver is written to read and/or write a virtual file. Low-level
device drivers transform information read from the device into a stream & transform
stream information into a device-dependent format before writing it.
As a result of the design of the I/O system architecture, the API to the I/O subsystem
is not complex. For example, subsystems can use NtCreateFile or NtOpenFile to
create a handle to an Executive file object, NtReadFile and NtWriteFile to read and
write an open file, and NtLockFile and NtUnlockFile to lock and unlock a portion of
a file.
Figure: a Win32 application calls the Win32 API, which the Win32 subsystem
implements by invoking the native API exported by NTOSKRNL.
12.3 Keywords
Microkernel: It is a small nucleus of code comprising of the most essential OS
functions.
NT Kernel: It provides specific mechanisms for general object and memory
management.
HAL: Hardware Abstraction Layer (HAL) is responsible for mapping various low-
level, processor-specific operations into a fixed interface that is used by the
Windows NT Kernel and Executive.
Virtual Memory Manager: It maps the virtual address referenced by the thread into
the physical executable memory.
I/O Manager: It is responsible for handling all the input/output operations to
every device in the system.
Cache Manager: It is designed to work with the Virtual Memory Manager and the
I/O Manager to perform read-ahead and write-behind on virtual files.
12.4 SUMMARY
Microsoft designed NT to be an extensible, portable operating system, able to take
advantage of new techniques and hardware. NT supports multiple operating
environments and symmetric multiprocessing. The use of kernel objects to provide
basic services, and the support for client-server computing, enable NT to support a
wide variety of application environments. For instance, NT can run programs
compiled for MS-DOS, Win16, Windows 95, NT, and POSIX. It provides virtual
memory, integrated caching, and preemptive scheduling. NT supports a security
model stronger than those of previous Microsoft operating systems, and includes