Modern Operating System
INTRODUCTION TO OPERATING
SYSTEMS
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 OS and computer system
1.3 System performance
1.4 Classes of operating systems
1.4.1 Batch processing systems
1.4.1.1 Simple batch systems
1.4.1.2 Multi-programmed batch systems
1.4.2 Time sharing systems
1.4.3 Multiprocessing systems
1.4.3.1 Symmetric multiprocessing systems
1.4.3.2 Asymmetric multiprocessing systems
1.4.4 Real time systems
1.4.4.1 Hard and soft real-time systems
1.4.4.2 Features of a real-time operating system
1.4.5 Distributed systems
1.4.6 Desktop systems
1.4.7 Handheld systems
1.4.8 Clustered systems
1.5 Let us sum up
1.6 Unit end questions
1.0 OBJECTIVES
1.1 INTRODUCTION
TABLE 1.1
KEY FEATURES OF CLASSES OF OPERATING SYSTEMS
This idea also applies to real life situations. You do not have
only one subject to study. Rather, several subjects may be in the
process of being served at the same time. Sometimes, before
studying one entire subject, you might check some other subject to
avoid monotonous study. Thus, if you have enough subjects, you
never need to remain idle.
1.4.2 TIME SHARING SYSTEMS
A time-sharing operating system focuses on facilitating quick
response to subrequests made by all processes, which provides a
tangible benefit to users. It is achieved by giving a fair execution
opportunity to each process through two means: The OS services
all processes by turn, which is called round-robin scheduling. It also
prevents a process from using too much CPU time when scheduled
to execute, which is called time-slicing. The combination of these
two techniques ensures that no process has to wait long for CPU
attention.
Feature: Explanation

Concurrency within an application: A programmer can indicate that some parts of an application should be executed concurrently with one another. The OS considers execution of each such part as a process.

Process priorities: A programmer can assign priorities to processes.

Scheduling: The OS uses priority-based or deadline-aware scheduling.

Domain-specific events, interrupts: A programmer can define special situations within the external system as events, associate interrupts with them, and specify event handling actions for them.

Predictability: Policies and overhead of the OS should be predictable.

Reliability: The OS ensures that an application can continue to function even when faults occur in the computer.
2
2.0 OBJECTIVES
2.1 INTRODUCTION
2.3 ASSEMBLERS
2.4 INTERPRETERS
2.5 LINKERS
1. Define :
a. Translator
b. Assembler
c. Compiler
d. Interpreter
e. Linker
2. State the functions of an assembler.
3. Briefly explain the working of an interpreter.
4. Distinguish between Compiled versus interpreted
Languages.
5. What is a linker? Explain with the help of a diagram.
3
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Operating system services
3.2.1 Program execution
3.2.2 I/O Operations
3.2.3 File systems
3.2.4 Communication
3.2.5 Resource Allocation
3.2.6 Accounting
3.2.7 Error detection
3.2.8 Protection and security
3.3 User Operating System Interface
3.3.1 Command Interpreter
3.3.2 Graphical user interface
3.4 System calls
3.4.1 Types of system calls
3.4.1.1 Process control
3.4.1.2 File management
3.4.1.3 Device management
3.4.1.4 Information maintenance
3.4.1.5 Communications
3.4.1.6 Protection
3.5 System programs
3.5.1 File management
3.5.2 Status information
3.5.3 File modification
3.5.4 Programming-language support
3.5.5 Program loading and execution
3.5.6 Communications
3.5.7 Application programs
3.6 OS design and implementation
3.7 Let us sum up
3.8 Unit end questions
3.0 OBJECTIVES
3.1 INTRODUCTION
3.2.3 FILE-SYSTEMS:
3.2.4 COMMUNICATIONS:
3.2.6 ACCOUNTING:
We want to keep track of which users use how many and what kinds of
computer resources. This record keeping may be used for
accounting (so that users can be billed) or simply for accumulating
usage statistics.
A GUI:
Usually uses mouse, keyboard, and monitor.
Icons represent files, programs, actions, etc.
Various mouse buttons over objects in the interface cause
various actions (provide information, options, execute a
function, or open a directory, known as a folder).
Invented at Xerox PARC.
3.4.1.6 PROTECTION
Get File Security, Set File Security
Get Security Group, Set Security Group
Some programs simply ask the system for the date, time,
amount of available memory or disk space, number of users, or
similar status information. Others are more complex, providing
detailed performance, logging, and debugging information. Typically,
these programs format and print the output to the terminal or other
output devices or files or display it in a window of the GUI. Some
systems also support a registry which is used to store and retrieve
configuration information.
3.5.6 COMMUNICATIONS:
Design Goals
Specifying and designing an operating system is a highly
creative task. The first problem in designing a system is to define
goals and specifications. At the highest level, the design of the
system will be affected by the choice of hardware and the type of
system: batch, time shared, single user, multiuser, distributed, real
time, or general purpose. Beyond this highest design level, the
requirements may be much harder to specify. The requirements
can, however, be divided into two basic groups: user goals and
system goals.
Implementation
Once an operating system is designed, it must be
implemented. Traditionally, operating systems have been written in
assembly language. Now, however, they are most commonly
written in higher-level languages such as C or C++. The first
system that was not written in assembly language was probably the
Master Control Program (MCP) for Burroughs computers and it was
written in a variant of ALGOL. MULTICS, developed at MIT, was
written mainly in PL/1. The Linux and Windows XP operating
systems are written mostly in C, although there are some small
sections of assembly code for device drivers and for saving and
restoring the state of registers.
4
OPERATING SYSTEM STRUCTURES
Unit Structure
4.0 Objectives
4.1 Introduction
4.2 Operating system structures
4.2.1 Simple structure
4.2.2 Layered approach
4.2.3 Microkernel approach
4.2.4 Modules
4.3 Operating system generation
4.4 System boot
4.5 Let us sum up
4.6 Unit end questions
4.0 OBJECTIVES
4.1 INTRODUCTION
4.2.4 MODULES:
Fig: Solaris loadable kernel modules — the core Solaris kernel
surrounded by scheduling classes, file systems, loadable system
calls, executable formats, STREAMS modules, miscellaneous
modules, and device and bus drivers.
VIRTUAL MACHINES
Unit Structure
5.0 Objectives
5.1 Introduction
5.2 Virtual Machines
5.2.1 History
5.2.2 Benefits
5.2.3 Simulation
5.2.4 Para-virtualization
5.2.5 Implementation
5.2.6 Examples
5.2.6.1 VMware
5.2.6.2 The java virtual machine
5.2.6.3 The .net framework
5.3 Let us sum up
5.4 Unit end questions
5.0 OBJECTIVES
5.1 INTRODUCTION
5.2.1 History
Virtual machines first appeared as the VM Operating System
for IBM mainframes in 1972.
5.2.2 Benefits
5.2.4 Para-virtualization
Para-virtualization is another variation on the theme, in which
an environment is provided for the guest program that is similar
to its native OS, without trying to completely mimic it.
5.2.6 Examples
5.2.6.1 VMware
Each virtual machine has its own virtual CPU, memory, disk
drives, network interfaces, and so forth. The physical disk the guest
owns and manages is a file within the file system of the host
operating system. To create an identical guest instance, we can
simply copy the file. Copying the file to another location protects the
guest instance against a disaster at the original site. Moving the file
to another location moves the guest system. These scenarios show
how virtualization can improve the efficiency of system
administration as well as system resource use.
Fig 5.2 VMware Architecture
6
PROCESS
Unit Structure
6.0 Objectives
6.1 Introduction
6.2 Process concepts
6.2.1 Process states
6.2.2 Process control block
6.2.3 Threads
6.3 Process scheduling
6.4 Scheduling criteria
6.5 Let us sum up
6.6 Unit end questions
6.0 OBJECTIVES
6.1 INTRODUCTION
Fig: A process control block, listing its fields — identifier,
state, priority, program counter, memory pointers, context data,
I/O status information, and accounting information.
Process state:
The state may be new, ready, running, waiting, halted, and so on.
Program counter:
The counter indicates the address of the next instruction to be
executed for this process.
CPU registers:
The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information
must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
CPU-scheduling information:
This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
Memory-management information:
This information may include such information as the value of the
base and limit registers, the page tables, or the segment tables,
depending on the memory system used by the operating system.
Accounting information:
This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
6.2.3 THREADS
All systems
Fairness - giving each process a fair share of the CPU
Policy enforcement - seeing that stated policy is carried out
Balance - keeping all parts of the system busy
Batch systems
Throughput - maximize jobs per hour
Turnaround time - minimize time between submission and
termination
CPU utilization - keep the CPU busy all the time
Interactive systems
Response time - respond to requests quickly
Proportionality - meet users’ expectations
Real-time systems
Meeting deadlines - avoid losing data
Predictability - avoid quality degradation in multimedia systems
The part of the operating system that makes the choice is called the
scheduler, and the algorithm it uses is called the scheduling
algorithm.
1. Define a process.
2. Describe various scheduling criteria.
3. What are threads?
4. What are the components of a Process Control Block?
7
PROCESS SCHEDULING
ALGORITHMS
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Scheduling Algorithms
7.2.1 First-come, First-served (fcfs) scheduling
7.2.2 Shortest job first scheduling
7.2.3 Priority scheduling
7.2.4 Round robin scheduling
7.2.5 Multilevel queue scheduling
7.2.6 Multilevel feedback queue scheduling
7.3 Let us sum up
7.4 Unit end questions
7.0 OBJECTIVES
7.1 INTRODUCTION
ta = 7.40 seconds, w = 2.22
Fig 7.2 (a) Running four jobs A, B, C, D with run times of 8, 4, 4,
and 4 minutes in the original order. (b) Running them in shortest
job first order.
Now let us consider running these four jobs using shortest job
first, as shown in Fig. 7.2 (b). The turnaround times are now 4, 8,
12, and 20 minutes for an average of 11 minutes. Shortest job first
is provably optimal. Consider the case of four jobs, with run times
of a, b, c, and d, respectively. The first job finishes at time a, the
second finishes at time a + b, and so on. The mean turnaround time
is (4a + 3b + 2c + d)/4. It is clear that a contributes more to the
average than the other times, so it should be the shortest job, with
b next, then c, and finally d as the longest, since it affects only its
own turnaround time. The same argument applies equally well to any
number of jobs.
It can be seen that processes P2, P3, and P4, which arrive at
around the same time, receive approximately equal weighted
turnarounds. P4 receives the worst weighted turnaround because
through most of its life it is one of three processes present in the
system. P1 receives the best weighted turnaround because no other
process exists in the system during the early part of its execution.
Thus weighted turnarounds depend on the load in the system.
foreground (interactive)
background (batch)
The algorithm chooses the process from the highest-priority
occupied queue and runs that process either
preemptively or
non-preemptively.
Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
Possibility I
If each queue has absolute priority over lower-priority queues,
then no process in a lower-priority queue can run unless the
higher-priority queues are all empty.
For example, in the above figure no process in the batch
queue could run unless the queues for system processes,
interactive processes, and interactive editing processes were all
empty.
Possibility II
If there is a time slice between the queues, then each queue
gets a certain amount of CPU time, which it can then schedule
among the processes in its queue. For instance:
80% of the CPU time to foreground queue using RR.
20% of the CPU time to background queue using FCFS.
Example:
Three queues:
1. Q0 – time quantum 8 milliseconds
2. Q1 – time quantum 16 milliseconds
3. Q2 – FCFS
Scheduling:
1. A new job enters queue Q0 which is served FCFS. When
it gains CPU, job receives 8 milliseconds. If it does not finish in
8 milliseconds, job is moved to queue Q1.
2. At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete, it is
preempted and moved to queue Q2.
1. Define
a. Quantum
b. Aging
2. Give an example of First-Come, First-Served Scheduling.
3. What is the difference between Multilevel Queue
Scheduling and Multilevel Feedback Queue Scheduling?
4. Describe the architecture of MFQ scheduling with the help
of diagrams.
5. State the criteria for internal and external priorities.
8
Unit Structure
8.0 Objectives
8.1 Introduction
8.2 Operations on processes
8.2.1 Process creation
8.2.2 Process termination
8.2.3 Cooperating processes
8.3 Interprocess communication
8.3.1 Message passing system
8.3.1.1 Direct communication
8.3.1.2 Indirect communication
8.3.1.3 Synchronisation
8.3.1.4 Buffering
8.4 Multithreading models
8.4.1 Many-to-one Model
8.4.2 One-to-one Model
8.4.3 Many-to-many model
8.5 Threading Issues
8.5.1 Semantics of fork() and exec() system calls
8.5.2 Thread Cancellation
8.5.3 Signal Handling
8.5.4 Thread pools
8.5.5 Thread specific data
8.5.6 Scheduler activations
8.6 Thread scheduling
8.6.1 Contention scope
8.6.2 Pthread scheduling
8.7 Let us sum up
8.8 Unit end questions
8.0 OBJECTIVES
After reading this unit you will be able to:
Describe the creation & termination of a process
Study various interprocess communications
Introduce the notion of a thread- a fundamental unit of CPU
utilization that forms the basis of multithreaded computer
systems.
Examine issues related to multithreaded programming.
8.1 INTRODUCTION
Fig: A tree of processes, with the root process at the top.
There are also two possibilities in terms of the address space of the
new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main() {
    pid_t pid;
    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}
The parent creates a child process using the fork system call.
We now have two different processes running a copy of the same
program. The value of the pid for the child process is zero; that for
the parent is an integer value greater than zero. The child process
overlays its address space with the UNIX command /bin/ls (used to
get a directory listing) using the execlp system call. The parent
waits for the child process to complete with the wait system call.
When the child process completes, the parent process resumes from
the wait call and then terminates using the exit system call.
Normal Exit
Most processes terminate because they have done
their job. In UNIX, this call is exit.
Error Exit
When the process discovers a fatal error. For example, a
user tries to compile a program that does not exist.
Fatal Error
An error caused by the process due to a bug in the program,
for example, executing an illegal instruction, referencing
non-existent memory, or dividing by zero.
Information sharing:
Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an
environment that allows concurrent access to these types of
resources.
Computation speedup:
If we want a particular task to run faster, we must break
it into subtasks, each of which will be executing in parallel
with the others. Such a speedup can be achieved only if the
computer has multiple processing elements (such as CPUs
or I/O channels).
Modularity:
To construct the system in a modular fashion, dividing
the system functions into separate processes or threads.
Convenience:
An individual user may have many tasks to work on at one time.
For instance, a user may be editing, printing, and compiling
in parallel.
8.3.1.3 SYNCHRONIZATION
The send and receive system calls are used to communicate
between processes but there are different design options for
implementing these calls. Message passing may be either
blocking or non-blocking - also known as synchronous and
asynchronous.
Blocking send:
The sending process is blocked until the message is received
by the receiving process or by the mailbox.
Non-blocking send:
The sending process sends the message and resumes
operation.
Blocking receive:
The receiver blocks until a message is available.
Non-blocking receive:
The receiver retrieves either a valid message or a null.
8.3.1.4 BUFFERING
During direct or indirect communication, messages exchanged
between communicating processes reside in a temporary queue,
which can be implemented in one of three ways:
Zero capacity:
The queue has maximum length 0; thus, the link cannot
have any message waiting in it. In this case, the sender must
block until the recipient receives the message. This is referred
to as no buffering.
Bounded capacity:
The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent,
the latter is placed in the queue (either the message is copied or
a pointer to the message is kept), and the sender can continue
execution without waiting. If the link is full, the sender must
block until space is available in the queue. This is referred to as
automatic buffering.
Unbounded capacity:
The queue has potentially infinite length; thus, any number
of messages can wait in it. The sender never blocks. This is also
referred to as automatic buffering.
2. A thread pool limits the number of threads that exist at any one
point. This is particularly important on systems that cannot
support a large number of concurrent threads.
1. Define :
(a) Thread-specific Data
(b) Multithread programming
(c) Parent Process
2. Discuss different threading issues.
3. What is meant by cooperating processes?
4. Write a short note on Message Passing System.
5. How is a process created?
9
CLIENT-SERVER SYSTEMS
Unit Structure
9.0 Objectives
9.1 Introduction
9.2 Communication in client-server system
9.2.1 Sockets
9.2.2 Remote procedure calls (RPC)
9.2.3 Pipes
9.3 The critical-section problem
9.4 Peterson’s solution
9.5 Semaphores
9.6 Let us sum up
9.7 Unit end questions
9.0 OBJECTIVES
9.1 INTRODUCTION
9.2.1 SOCKETS
A socket is defined as an endpoint for communication. A pair
of processes communicating over a network employ a pair of
sockets-one for each process. A socket is identified by an IP
address concatenated with a port number.
import java.net.*;
import java.io.*;

public class DateServer {
    public static void main(String[] args) {
        try {
            ServerSocket sock = new ServerSocket(6013);
            // now listen for connections
            while (true) {
                Socket client = sock.accept();
                PrintWriter pout =
                    new PrintWriter(client.getOutputStream(), true);
                // write the Date to the socket
                pout.println(new java.util.Date().toString());
                // close the socket and resume
                // listening for connections
                client.close();
            }
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
import java.net.*;
import java.io.*;

public class DateClient {
    public static void main(String[] args) {
        try {
            // make connection to server socket
            Socket sock = new Socket("127.0.0.1", 6013);
            InputStream in = sock.getInputStream();
            BufferedReader bin =
                new BufferedReader(new InputStreamReader(in));
            // read the date from the socket
            String line;
            while ((line = bin.readLine()) != null)
                System.out.println(line);
            // close the socket connection
            sock.close();
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
9.2.3 Pipes
A pipe acts as a conduit allowing two processes to
communicate. Pipes were one of the first IPC mechanisms in early
UNIX systems and typically provide one of the simpler ways for
processes to communicate with one another. Two common types of
pipes used on both UNIX and Windows systems are ordinary pipes
and named pipes.
do {
entry section
critical section
exit section
remainder section
} while(1);
Mutual Exclusion:
If process Pi is executing in its critical section, then no other
process can be executing in its critical section.
Progress:
If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those
processes that are not executing in their remainder section can
participate in the decision on which will enter its critical section
next, and this selection cannot be postponed indefinitely.
Bounded Waiting:
There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process
has made a request to enter its critical section and before that
request is granted.
#include “prototype.h”
#define FALSE 0
#define TRUE 1
#define N 2 /* Number of processes */
9.5 SEMAPHORES
Definition:
Concept of operation:
10
MAIN MEMORY
Unit Structure
10.0 Objectives
10.1 Introduction
10.2 Memory Management Without Swapping / Paging
10.2.1 Dynamic Loading
10.2.2 Dynamic Linking
10.2.3 Overlays
10.2.4 Logical Versus Physical Address Space
10.3 Swapping
10.4 Contiguous Memory Allocation
10.4.1 Single-Partition Allocation
10.4.2 Multiple-Partition Allocation
10.4.3 External And Internal Fragmentation
10.5 Paging
10.6 Segmentation
10.7 Let us sum up
10.8 Unit End Questions
10.0 OBJECTIVES
10.1 INTRODUCTION
10.2.3 OVERLAYS
The entire program and data of a process must be in physical
memory for the process to execute. The size of a process is limited
to the size of physical memory. So that a process can be larger
than the amount of memory allocated to it, a technique called
overlays is sometimes used. The idea of overlays is to keep in
memory only those instructions and data that are needed at any
given time. When other instructions are needed, they are loaded
into space that was occupied previously by instructions that are no
longer needed.
Let us consider a two-pass assembler with the following components:
Pass 1           70K
Pass 2           80K
Symbol table     20K
Common routines  30K
FIG 10.2 OVERLAYS
First-fit:
Allocate the first hole that is big enough. Searching can start
either at the beginning of the set of holes or where the previous
first-fit search ended. We can stop searching as soon as we find a
free hole that is large enough.
Best-fit:
Allocate the smallest hole that is big enough. We must search
the entire list, unless the list is kept ordered by size. This
strategy produces the smallest leftover hole.
Worst-fit:
Allocate the largest hole. Again, we must search the entire list
unless it is sorted by size. This strategy produces the largest
leftover hole, which may be more useful than the smaller leftover
hole from a best-fit approach.
10.5 PAGING
10.6 SEGMENTATION
11
VIRTUAL MEMORY
Unit Structure
11.0 Objectives
11.1 Introduction
11.2 Demand Paging
11.3 Page Replacement Algorithms
11.3.1 Fifo Algorithm
11.3.2 Optimal Algorithm
11.3.3 Lru Algorithm
11.3.4 Lru Approximation Algorithms
11.3.5 Page Buffering Algorithm
11.4 Modeling Paging Algorithms
11.4.1 Working-Set Model
11.5 Design Issues for Paging System
11.5.1 Prepaging
11.5.2 Page Size
11.5.3 Program Structure
11.5.4 I/O Interlock
11.6 Let Us Sum Up
11.7 Unit End Questions
11.0 OBJECTIVES
11.1 INTRODUCTION
Fig 11.4 FIFO Page-replacement algorithm
11.3.2 Optimal Algorithm
An optimal page-replacement algorithm has the lowest page-
fault rate of all algorithms. An optimal page-replacement algorithm
exists, and has been called OPT or MIN. It will simply replace the
page that will not be used for the longest period of time.
Fig 11.5 Optimal Algorithm
For a reference string S and its reverse SR, the page-
fault rate for the OPT algorithm on S is the same as the page-fault
rate for the OPT algorithm on SR.
11.5.1 Prepaging
Prepaging is an attempt to prevent this high level of initial
paging. In other words, prepaging is done to reduce the large
number of page faults that occur at process startup.
Note: Prepage all or some of the pages a process will need, before
they are referenced.
Program 1:
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
With the array stored row by row, one row per page, this zeroes one
word in each of 128 pages on every pass: 128 x 128 = 16,384 page faults.
Program 2:
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
This zeroes each page completely before moving on: 128 page faults.
Fig 11.8 The reason why frames used for I/O must be in
memory
12
Unit Structure
12.0 Objectives
12.1 Introduction
12.2 File Concept
12.2.1 File Structure
12.2.2 File Management Systems
12.3 File System Mounting
12.3.1 Allocation Methods
12.3.1.1 Contiguous Allocation
12.3.1.2 Linked Allocation
12.3.1.3 Indexed Allocation
12.4 Free Space Management
12.4.1 Bit Vector
12.4.2 Linked List
12.4.3 Grouping
12.4.4 Counting
12.5 File Sharing
12.5.1 Multiple Users
12.5.2 Remote File Systems
12.5.3 Consistency Semantics
12.6 NFS
12.7 Let Us Sum Up
12.8 Unit End Questions
12.0 OBJECTIVES
12.1 INTRODUCTION
One of the most important parts of an operating system is the file
system. The file system provides the resource abstractions typically
associated with secondary storage. The file system permits users
to create data collections, called files, with desirable properties,
such as the following:
Long-term existence: Files are stored on disk or other
secondary storage and do not disappear when a user logs
off.
Sharable between processes: Files have names and
can have associated access permissions that permit
controlled sharing.
Structure: Depending on the file system, a file can have an
internal structure that is convenient for particular
applications. In addition, files can be organized into
hierarchical or more complex structure to reflect the
relationships among files.
The basic I/O supervisor is responsible for all file I/O initiation
and termination. At this level control structures are maintained that
deal with device I/O scheduling and file status. The basic I/O
supervisor selects the device on which file I/O is to be performed,
based on the particular file selected.
Each file has its own index block, which is an array of disk-
block addresses. The ith entry in the index block points to the ith
block of the file. The directory contains the address of the index
block.
When the file is created, all pointers in the index block are set
to nil. When the ith block is first written, a block is obtained
from the free space manager, and its address is put in the ith
index-block entry.
Fig 12.3 Indexed allocation of disk space
Indexed allocation supports direct access, without suffering from
external fragmentation, because any free block on the disk may
satisfy a request for more space. Indexed allocation does, however,
suffer from wasted space: the pointer overhead of the index block is
generally greater than the pointer overhead of linked allocation.
12.4.3 GROUPING
12.4.4 COUNTING
12.6 NFS
NFS Protocol
NFS protocol provides a set of remote procedure calls
for remote file operations. The procedures support the following
operations:
searching for a file within a directory
reading a set of directory entries
manipulating links and directories
accessing file attributes
reading and writing files
NFS servers are stateless; each request has to provide a full set of
arguments. Modified data must be committed to the server’s disk
before results are returned to the client. The NFS protocol does not
provide concurrency-control mechanisms.
13
Unit Structure
13.0 Objectives
13.1 Introduction
13.2 Disk Structure
13.3 Disk Management
13.3.1 Disk Formatting
13.3.2 Boot Block
13.3.3 Bad Blocks
13.4 Swap Space Management
13.4.1 Swap Space Use
13.4.2 Swap Space Location
13.5 Raid Structure
13.5.1 Improvement of Reliability via Redundancy
13.5.2 Improvement in Performance via Parallelism
13.5.3 Raid Levels
13.5.4 Selecting a raid level
13.5.5 Extensions
13.5.6 Problems with raid
13.6 Stable Storage Implementation
13.7 Deadlocks
13.7.1 Deadlock Prevention
13.7.2 Deadlock Avoidance
13.7.3 Deadlock Detection
13.7.4 Deadlock Recovery
13.8 Let Us Sum Up
13.9 Unit End Questions
13.0 OBJECTIVES
After reading this unit you will be able to:
Describe boot blocks and bad blocks
Distinguish between various levels of RAID
Define deadlock
Detect, recover, avoid and prevent deadlocks
13.1 INTRODUCTION
13.5.5 Extensions
RAID concepts have been extended to tape drives (e.g.
striping tapes for faster backups or parity checking tapes for
reliability), and for broadcasting of data.
13.7 DEADLOCKS
Deadlock Characterization
In deadlock, processes never finish executing and system
resources are tied up, preventing other jobs from ever starting.
Necessary Conditions
A deadlock situation can arise if the following four conditions
hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-
sharable mode; that is, only one process at a time can use the
resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
4. Circular wait: There must exist a set {P0, P1, ..., Pn} of waiting
processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting
for a resource that is held by Pn, and Pn is waiting for a resource
that is held by P0.
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a
directed graph called a system resource-allocation graph. The set
of vertices V is partitioned into two different types of nodes: P = {P0,
P1, ..., Pn}, the set consisting of all the active processes in the system,
and R = {R0, R1, ..., Rm}, the set consisting of all resource types in
the system.
Fig 13.4 (a) Example of Resource allocation graph
(b) Resource allocation graph with a deadlock
A directed edge from process Pi to resource type Rj is
denoted by Pi → Rj; it signifies that process Pi has requested an
instance of resource type Rj and is currently waiting for that
resource. A directed edge from resource type Rj to process Pi is
denoted by Rj → Pi; it signifies that an instance of resource type Rj
has been allocated to process Pi. A directed edge Pi → Rj is called a
request edge; a directed edge Rj → Pi is called an assignment edge.
1. Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable
resources. For example, a printer cannot be simultaneously shared
by several processes. Sharable resources, on the other hand, do
not require mutually exclusive access, and thus cannot be involved
in a deadlock.
3. No Preemption
If a process that is holding some resources requests another
resource that cannot be immediately allocated to it, then all
resources currently being held are preempted. That is, these
resources are implicitly released. The preempted resources are
added to the list of resources for which the process is waiting.
The process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
4. Circular Wait
One way to ensure that the circular-wait condition never holds
is to impose a total ordering of all resource types, and to require
that each process requests resources in an increasing order of
enumeration.
1. Safe State
A state is safe if the system can allocate resources to each
process (up to its maximum) in some order and still avoid a
deadlock. More formally, a system is in a safe state only if there
exists a safe sequence. A sequence of processes < P1, P2, ..., Pn > is
a safe sequence for the current allocation state if, for each Pi, the
resources that Pi can still request can be satisfied by the currently
available resources plus the resources held by all the Pj, with j < i.
In this situation, if the resources that process Pi needs are not
immediately available, then Pi can wait until all Pj have finished.
When they have finished, Pi can obtain all of its needed resources,
complete its designated task, return its allocated resources, and
terminate. When Pi terminates, Pi+1 can obtain its needed
resources, and so on. If no such sequence exists, then the system
state is said to be unsafe.
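The search for a safe sequence can be sketched directly from this definition. In this illustrative example (the vectors are made-up numbers, with Need = Max − Allocation), the algorithm repeatedly finds a process whose remaining need fits in the available resources, pretends it runs to completion, and reclaims what it holds:

```python
# Sketch of the safety check: build a safe sequence greedily,
# or report None when no process can be finished next.

def find_safe_sequence(available, allocation, need):
    """Return a safe sequence of process indices, or None if unsafe."""
    work = list(available)                 # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pi can finish: it returns everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# 3 processes, 2 resource types (illustrative numbers).
available  = [3, 3]
allocation = [[1, 0], [2, 1], [2, 1]]
need       = [[3, 2], [0, 1], [5, 0]]
print(find_safe_sequence(available, allocation, need))   # [0, 1, 2]
```

If the call returns None, granting the allocation that produced this state would have been refused by an avoidance algorithm.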
Fig 13.5 Safe, Unsafe deadlock state
(a) (b)
Fig 13.6 (a) Resource-Allocation Graph
(b) Unsafe state in Resource-Allocation
Graph
3. Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a
resource-allocation system with multiple instances of each resource
type. The deadlock-avoidance algorithm that we describe next is
applicable to such a system, but is less efficient than the resource-
allocation graph scheme. This algorithm is commonly known as the
banker’s algorithm.
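The request-handling step of the banker's algorithm can be sketched as follows. This is an illustrative implementation, not the book's own code; the matrices use the conventional Available/Allocation/Need layout, and the numbers in the usage example are made up. A request is granted only if it is within the process's declared need, within the available resources, and the resulting state is still safe:

```python
# Sketch of the banker's algorithm resource-request step.

def is_safe(available, allocation, need):
    """Return True if some safe sequence exists for this state."""
    work, finished = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = changed = True
    return all(finished)

def request_resources(pid, request, available, allocation, need):
    """Try to grant `request` to process `pid`; return True on success."""
    if any(r > n for r, n in zip(request, need[pid])):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):
        return False                        # must wait: not enough free
    # Pretend to allocate, then test whether the state is still safe.
    available[:]    = [a - r for a, r in zip(available, request)]
    allocation[pid] = [x + r for x, r in zip(allocation[pid], request)]
    need[pid]       = [n - r for n, r in zip(need[pid], request)]
    if is_safe(available, allocation, need):
        return True
    # Unsafe: roll the provisional allocation back and make pid wait.
    available[:]    = [a + r for a, r in zip(available, request)]
    allocation[pid] = [x - r for x, r in zip(allocation[pid], request)]
    need[pid]       = [n + r for n, r in zip(need[pid], request)]
    return False

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(request_resources(1, [1, 0, 2], available, allocation, need))  # True
```

Note the tentative-grant-then-check structure: this is what makes the banker's algorithm an avoidance scheme rather than a detection scheme.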
5. Define
a. Low-level formatting
b. Swap Space
c. RAID
d. Mirroring
14
I/O SYSTEMS
Unit Structure
14.0 Objectives
14.1 Introduction
14.2 Application I/O Interface
14.2.1 Block and Character Devices
14.2.2 Network Devices
14.2.3 Clocks and timers
14.2.4 Blocking and non-blocking I/O
14.3 Transforming I/O Requests to Hardware Operations
14.4 Streams
14.5 Performance
14.6 Let Us Sum Up
14.7 Unit End Questions
14.0 OBJECTIVES
14.1 INTRODUCTION
(a) (b)
Fig 14.3 Two I/O methods: (a) synchronous and
(b) asynchronous
14.4 STREAMS
The streams mechanism in UNIX provides a bi-directional
pipeline between a user process and a device driver, onto which
additional modules can be added.
The user process interacts with the stream head. The device
driver interacts with the device end. Zero or more stream
modules can be pushed onto the stream, using ioctl(). These
modules may filter and/or modify the data as it passes through the
stream.
14.5 PERFORMANCE
15
Unit Structure
15.0 Objectives
15.1 Introduction
15.2 Principles of protection
15.3 Domain of protection
15.3.1 Domain Structure
15.3.2 An Example: Unix
15.4 Access Matrix
15.5 Access Control
15.6 Capability-Based Systems
15.6.1 An Example: Hydra
15.6.2 An Example: Cambridge Cap System
15.7 Language Based Protection
15.7.1 Compiler-Based Enforcement
15.7.2 Protection in java
15.8 The Security Problem
15.9 System and Network Threats
15.9.1 Worms
15.9.2 Port Scanning
15.9.3 Denial of service
15.10 Implementing Security Defenses
15.10.1 Security Policy
15.10.2 Vulnerability Assessment
15.10.3 Intrusion Detection
15.10.4 Virus Protection
15.10.5 Auditing, Accounting, and Logging
15.11 Let Us Sum Up
15.12 Unit End Questions
15.0 OBJECTIVES
After studying this unit, you will be able:
To ensure that each shared resource is used only in accordance
with system policies, which may be set either by system
designers or by system administrators.
To ensure that errant programs cause the minimal amount of
damage possible.
Note that protection systems only provide the mechanisms for
enforcing policies and ensuring reliable systems. It is up to
administrators and users to implement those mechanisms
effectively.
15.1 INTRODUCTION
Typically each user is given their own account, and has only
enough privilege to modify their own files. The root account should
not be used for normal day to day activities - The System
Administrator should also have an ordinary account, and reserve
use of the root account for only those tasks which need the root
privileges
Security
Security provided by the kernel offers better protection than
that provided by a compiler. The security of the compiler-based
enforcement is dependent upon the integrity of the compiler itself,
as well as requiring that files not be modified after they are
compiled. The kernel is in a better position to protect itself from
modification, as well as protecting access to specific files. Where
hardware support of individual memory accesses is available, the
protection is stronger still.
Flexibility
A kernel-based protection system is not flexible enough to provide
the specific protection needed by an individual programmer, though
it may provide support which the programmer can make use of.
Compilers are more easily changed and updated when necessary
to change the protection services offered or their implementation.
Efficiency
The most efficient protection mechanism is one supported by
hardware and microcode. Insofar as software based protection is
concerned, compiler-based systems have the advantage that many
checks can be made off-line, at compile time, rather than during
execution.
15.9.1 Worms
A worm is a process that uses the fork / spawn process to
make copies of itself in order to wreak havoc on a system. Worms
consume system resources, often blocking out other, legitimate
processes. Worms that propagate over networks can be especially
problematic, as they can tie up vast amounts of network resources
and bring down large-scale systems.
1. First it would check for accounts for which the account name
and the password were the same, such as "guest", "guest".
2. Then it would try an internal dictionary of 432 favorite password
choices. (I’m sure "password", "pass", and blank passwords
were all on the list.)
3. Finally it would try every word in the standard UNIX on-line
dictionary to try and break into user accounts.
With each new access the worm would check for already
running copies of itself, and 6 out of 7 times if it found one it would
stop. (The seventh was to prevent the worm from being stopped by
fake copies.)
Rich Text Format, RTF, files cannot carry macros, and hence
cannot carry Word macro viruses. Known safe programs (e.g. right
after a fresh install or after a thorough examination) can be digitally
signed, and periodically the files can be re-verified against the
stored digital signatures (which should themselves be kept secure,
such as on off-line, write-only media).
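The periodic re-verification just described can be sketched with a cryptographic hash. This is only an illustration: SHA-256 digests stand in for full digital signatures, the helper names are invented, and in a real deployment the stored digests themselves would have to be tamper-proof, as the text notes.

```python
# Sketch: detecting modified program files by comparing stored digests.
import hashlib
import os
import tempfile

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def take_baseline(paths):
    """Record digests right after a fresh install or examination."""
    return {p: digest(p) for p in paths}

def reverify(baseline):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, d in baseline.items() if digest(p) != d]

with tempfile.NamedTemporaryFile("wb", delete=False) as f:
    f.write(b"known good program")
baseline = take_baseline([f.name])
print(reverify(baseline))        # []  -- unchanged since baseline
with open(f.name, "wb") as g:
    g.write(b"infected!")
print(reverify(baseline))        # the file is flagged as modified
os.unlink(f.name)
```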
"The Cuckoo’s Egg" tells the story of how Cliff Stoll detected
one of the early UNIX break-ins when he noticed anomalies in the
accounting records on a computer system being used by physics
researchers.