OS Module I
OPERATING SYSTEM
There are two ways one can interact with an operating system:
• By means of operating-system calls in a program
• Directly by means of operating-system commands
OPERATING-SYSTEM STRUCTURE
1. Simple Structure
It consists of two separable parts: the kernel and the system programs.
The kernel is further separated into a series of interfaces and device drivers. Everything
below the system-call interface and above the physical hardware is the kernel. The
kernel provides the file system, CPU scheduling, memory management, and other
operating-system functions through system calls. An enormous amount of functionality
is combined into one level. This monolithic structure was difficult to implement and
maintain.
3. Microkernel
As the UNIX operating system expanded, the kernel became large and difficult to
manage. An operating system called Mach modularized the kernel using the microkernel
approach. This method structures the operating system by removing all nonessential
components from the kernel and implementing them as system-level and user-level
programs. The result is a smaller kernel. A microkernel provides minimal process and
memory management, along with a communication facility. The architecture of
a typical microkernel is as follows:
4. Hybrid Systems
In practice, very few operating systems adopt a single, strictly defined structure.
Instead, they combine different structures, resulting in hybrid systems that address
performance, security, and usability issues. We consider three hybrid systems: the
Apple Mac OS X operating system and the two most prominent mobile operating
systems, iOS and Android.
iOS
iOS is a mobile operating system designed by Apple to run its smartphone, the iPhone,
as well as its tablet computer, the iPad. iOS is structured on the Mac OS X operating
system, with added functionality pertinent to mobile devices, but does not directly run
Mac OS X applications. The structure of iOS appears in Figure below. Cocoa Touch is
an API for Objective-C that provides several frameworks for developing applications that
run on iOS devices. The fundamental difference between Cocoa, mentioned earlier, and
Cocoa Touch is that the latter provides support for hardware features unique to mobile
devices, such as touch screens. The media services layer provides services for
graphics, audio, and video.
Android
The Android operating system was designed by the Open Handset Alliance (led
primarily by Google) and was developed for Android smart phones and tablet
computers. Whereas iOS is designed to run on Apple mobile devices and is
closed-source, Android runs on a variety of mobile platforms and is open-source, partly
explaining its rapid rise in popularity.
As the demand for better processing speed and efficiency increased, operating
systems were enhanced with extra features.
1. Serial Processing
In serial processing, the resources of the computer system are dedicated to a single
program until its completion. Early computer systems were referred to as bare
machines. Programs for the bare machine had to be developed manually, converted
into binary code by hand, and entered into the system by means of switches. A
program was started by loading the program counter with the address of its first
instruction, and the results of execution were obtained by examining the
corresponding memory locations. If errors were detected, the program instructions had
to be changed and fed into the system again for execution.
2. Batch Processing
The next significant evolution of the operating system was the development of another
type of processing known as batch processing. In this case, programs of a similar
type, which require the same set of resources and perform the same kind of task, are
grouped into a batch and loaded into the system using an input storage device.
Once the programs are loaded, they are automatically executed by the operating system
in a serial manner. Along with the batch, operating-system commands written in a
language known as job control language (JCL) are embedded. These instructions tell
the operating system how to execute each job in the batch.
3. Multiprogramming
A batch operating system dedicates the resources of the computer system to a single
program at a time. During the course of its execution, a program oscillates between two
phases:
I. Computation-intensive phase
II. I/O-intensive phase
The computation-intensive phase is the period during which the program's instructions
are executed on the CPU. The I/O-intensive phase is the period during which the
program leaves the CPU in order to perform an I/O operation.
In a multiprogramming operating system, when an executing program leaves the CPU to
perform an I/O operation, the OS schedules the next ready program onto the CPU for
execution. When the first program completes its I/O, it rejoins the ready queue and is
eventually assigned the CPU again. A significant performance gain is achieved by this
interleaved execution of programs.
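The gain from interleaving can be illustrated with a small back-of-the-envelope model. This is a sketch, not part of the original text: the two example programs and their phase lengths are invented, and "perfect overlap" is an idealized lower bound.

```python
# Toy model: each program alternates (cpu, io) phases, durations in ms.
# Serial processing runs one program to completion before the next starts,
# so the CPU sits idle during every I/O phase.  Multiprogramming lets
# another ready program use the CPU while the first waits on I/O.

def serial_time(programs):
    # Elapsed time is simply the sum of every phase of every program.
    return sum(sum(cpu + io for cpu, io in prog) for prog in programs)

def lower_bound_multiprogrammed(programs):
    # With perfect overlap, elapsed time cannot be less than the total CPU
    # demand, nor less than any single program's own serial length.
    total_cpu = sum(cpu for prog in programs for cpu, io in prog)
    longest = max(sum(cpu + io for cpu, io in prog) for prog in programs)
    return max(total_cpu, longest)

progs = [[(4, 6), (4, 6)], [(5, 5), (5, 5)]]   # two hypothetical programs
print(serial_time(progs))                  # 40 ms end to end
print(lower_bound_multiprogrammed(progs))  # 20 ms is the best achievable
```

Serial execution takes 40 ms; with interleaving, the same work could finish in as little as 20 ms, since one program's I/O hides behind the other's computation.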
Advantages
Interleaved execution keeps the CPU busy during I/O phases, improving utilization and
throughput.
The characteristics of different operating systems vary according to the following factors:
i. Processor scheduling
ii. Memory management
iii. I/O management
iv. File management
Basis   Symmetric Multiprocessing            Asymmetric Multiprocessing
Basic   Each processor runs the tasks        Only the master processor runs the
        of the operating system.             tasks of the operating system.
A distributed operating system is one that looks to its users like an ordinary centralized
operating system but runs on multiple independent CPUs. The key concept here is
transparency. In other words, the use of multiple processors should be invisible to the
user. In a true distributed system, users are not aware of where their programs are
being run or where their files are residing; they should all be handled automatically and
efficiently by the operating system. Distributed operating systems have many aspects in
common with centralized ones but they also differ in certain ways.
Distributed operating systems, for example, often allow programs to run on several
processors at the same time, thus requiring more complex processor-scheduling
algorithms (scheduling refers to a set of policies and mechanisms built into the
operating system that controls the order in which the work to be done is completed) in
order to achieve maximum utilization of CPU time.
A network operating system, in contrast, provides an environment in which users are
aware of the multiplicity of machines:
• Each user normally works on his/her own system; using a different system
requires some kind of remote login, instead of having the operating system
dynamically allocate processes to CPUs.
• Users are typically aware of where each of their files is kept and must move files
from one system to another with explicit file-transfer commands instead of having
file placement managed by the operating system. The system has little or no fault
tolerance; if 5% of the personal computers crash, only 5% of the users are out of
business.
A network operating system offers many capabilities, including:
• Allowing users to access the various resources of the network hosts
Don Bosco College, Kottiyam Page 13
• Controlling access so that only users with the proper authorization are allowed to
access particular resources.
• Making the use of remote resources appear to be identical to the use of local
resources
• Providing up-to-the-minute network documentation on-line.
OPERATING-SYSTEM SERVICES
i. User interface
Almost all operating systems have a user interface (UI). This interface can take several
forms. One is a command-line interface (CLI), which uses text commands and a
method for entering them (say, a keyboard for typing in commands in a specific format
with specific options). Another is a batch interface, in which commands and directives
to control those commands are entered into files, and those files are executed. Most
commonly, a graphical user interface (GUI) is used. Here, the interface is a window
system with a pointing device to direct I/O, choose from menus, and make selections
and a keyboard to enter text. Some systems provide two or all three of these variations.
v. Communications
There are many circumstances in which one process needs to exchange information
with another process. Such communication may occur between processes that are
executing on the same computer or between processes that are executing on different
computer systems tied together by a computer network. Communications may be
implemented via shared memory, in which two or more processes read and write to a
shared section of memory, or via message passing, in which packets of information are
moved between processes by the operating system.
viii. Accounting
The OS keeps track of which users use how much and what kinds of computer resources.
This record keeping may be used for accounting (so that users can be billed) or simply
for accumulating usage statistics. Usage statistics may be a valuable tool for
researchers who wish to reconfigure the system to improve computing services.
PROCESS STATE
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:
Dormant
The process is not known to the OS.
New
The process is newly created by the OS.
Running
Instructions of the process are executed on the CPU.
Waiting
The process is waiting for some event to occur (such as an I/O completion or reception
of a signal).
Ready
The process has acquired all resources needed for its execution except the processor.
Terminated
The process has finished execution.
Each process is represented in the operating system by a process control block (PCB)
also called a task control block. It contains many pieces of information associated with
a specific process, including these:
Process state
The state may be new, ready, running, waiting, halted, and so on.
Program counter
The counter indicates the address of the next instruction to be executed for this
process.
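The states and PCB fields described above can be modeled as a minimal sketch. The class and field names below are invented for illustration; a real PCB holds many more fields (scheduling information, memory limits, accounting data, open files, and so on).

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    DORMANT = "dormant"        # not known to the OS
    NEW = "new"                # just created by the OS
    READY = "ready"            # has everything except the processor
    RUNNING = "running"        # instructions executing on the CPU
    WAITING = "waiting"        # blocked on an event such as I/O
    TERMINATED = "terminated"  # finished execution

@dataclass
class PCB:
    pid: int
    state: State = State.NEW
    program_counter: int = 0                       # address of next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register contents

pcb = PCB(pid=1)
pcb.state = State.READY        # admitted to the ready queue
pcb.state = State.RUNNING      # dispatched by the scheduler
pcb.program_counter = 0x0400   # saved/restored on a context switch
pcb.state = State.WAITING      # issued an I/O request
```

Saving the program counter and registers in the PCB is what allows the OS to resume the process later exactly where it left off.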
THREADS
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization. It comprises a thread ID, a program counter, a register set, and a stack.
Threads improve performance by weakening the process abstraction. A (heavyweight)
process is one thread of control executing one program in one address space. A
multithreaded process may have multiple threads of control running different parts of a
program in one address space. Because threads expose multitasking to the user
(cheaply), they are more powerful, but more complicated.
1. Responsiveness.
Multithreading an interactive application may allow a program to continue running even
if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user. This quality is especially useful in designing user
interfaces. For instance, consider what happens when a user clicks a button that results
in a time-consuming operation: a single-threaded application would be unresponsive
until the operation completed, whereas a multithreaded application can perform the
operation in a separate thread while remaining responsive.
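A minimal sketch of this idea using Python's threading module. The function and variable names are invented; in a real GUI the work would be launched from the button's event handler.

```python
import threading

def lengthy_operation(result):
    # Stands in for the time-consuming work triggered by a button click.
    result.append(sum(range(1_000_000)))

result = []
worker = threading.Thread(target=lengthy_operation, args=(result,))
worker.start()
# The main (interface) thread stays free to handle user input here,
# instead of freezing until the computation is done.
print("interface still responsive")
worker.join()    # wait for the background work before using its result
print(result[0])
```

The main thread prints immediately; only the explicit `join()` waits for the worker.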
4. Scalability
The benefits of multithreading can be even greater in a multiprocessor architecture.
CPU SCHEDULING
Scheduling refers to a set of policies and mechanisms built into the operating system
that govern the order in which the work to be done by a computer system is completed.
A scheduler is an OS module that selects the next job to be admitted into the system
and the next process to run. The primary objective of scheduling is to optimize system
performance in accordance with the criteria deemed most important by the system
designer.
Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which
characteristics are used for comparison can make a substantial difference in which
algorithm is judged to be best. The criteria include the following:
• Throughput: If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be ten processes per second.
• Turnaround time: From the point of view of a particular process, the important
criterion is how long it takes to execute that process. The interval from the time of
submission of a process to the time of completion is the turnaround time. Turnaround
time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
• Waiting time: The CPU-scheduling algorithm does not affect the amount of time
during which a process executes or does I/O. It affects only the amount of time that a
process spends waiting in the ready queue. Waiting time is the sum of the periods spent
waiting in the ready queue.
• Response time: In an interactive system, turnaround time may not be the best
criterion. Often, a process can produce some output fairly early and can continue
computing new results while previous results are being output to the user. Thus,
another measure is the time from the submission of a request until the first response is
produced. This measure, called response time, is the time it takes to start responding,
not the time it takes to output the response. The turnaround time is generally limited by
the speed of the output device.
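These definitions reduce to simple arithmetic, sketched below. The arrival, completion, and burst values are hypothetical, and I/O time is ignored for simplicity.

```python
def turnaround(arrival, completion):
    # Turnaround time: interval from submission to completion.
    return completion - arrival

def waiting(arrival, completion, burst):
    # Waiting time: turnaround minus the time spent actually executing
    # (I/O time is ignored in this simplified sketch).
    return turnaround(arrival, completion) - burst

# Hypothetical process: arrives at t=0, needs 24 ms of CPU, finishes at t=30.
print(turnaround(0, 30))   # 30 ms
print(waiting(0, 30, 24))  # 6 ms spent in the ready queue
```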
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU-scheduling algorithms.
1. First-Come, First-Served Scheduling
With first-come, first-served (FCFS) scheduling, the process that requests the CPU first
is allocated the CPU first. Consider the following set of processes that arrive at time 0,
with the length of the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get
the result shown in the following Gantt chart, which is a bar chart that illustrates a
particular schedule, including the start and finish times of each of the participating
processes:
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and
27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17
milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be
as shown in the following Gantt chart. The average waiting time is now (6 + 0 + 3)/3 = 3
milliseconds.
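The FCFS waiting times above can be reproduced with a short sketch (a minimal model: all processes are assumed to arrive at time 0, in list order).

```python
def fcfs_waiting_times(bursts):
    # Under FCFS, each process waits for the sum of the bursts of
    # everything that arrived ahead of it.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

order1 = fcfs_waiting_times([24, 3, 3])  # arrival order P1, P2, P3
order2 = fcfs_waiting_times([3, 3, 24])  # arrival order P2, P3, P1
print(order1, sum(order1) / 3)  # [0, 24, 27] 17.0
print(order2, sum(order2) / 3)  # [0, 3, 6] 3.0
```

Note how moving the long burst to the back cuts the average wait from 17 ms to 3 ms: the convoy of short jobs no longer queues behind it.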
2. Shortest-Job-First Scheduling
As an example of SJF scheduling, consider the following set of processes, with the
length of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, we would schedule these processes according to the following
Gantt chart:
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average
waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the
FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.
The SJF algorithm can be either preemptive or nonpreemptive. The choice arises
when a new process arrives at the ready queue while a previous process is still
executing. The next CPU burst of the newly arrived process may be shorter than what is
left of the currently executing process. A preemptive SJF algorithm will preempt the
currently executing process, whereas a nonpreemptive SJF algorithm will allow the
currently running process to finish its CPU burst. Preemptive SJF scheduling is
sometimes called shortest-remaining-time-first scheduling.
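Preemptive SJF can be sketched as a tick-by-tick simulation that always runs the arrived process with the shortest remaining time. The arrival times and bursts below are hypothetical, chosen only to show a preemption; they are not from this text.

```python
def srtf(procs):
    # procs: {name: (arrival, burst)} in ms.  One-millisecond ticks; at
    # each tick the arrived process with the shortest remaining time runs.
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:          # nothing has arrived yet: CPU idles one tick
            t += 1
            continue
        n = min(ready, key=lambda name: remaining[name])
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    # waiting time = finish - arrival - burst
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

waits = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits, sum(waits.values()) / 4)
# {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5
```

Here P1 is preempted at t = 1 because the newly arrived P2 has only 4 ms left versus P1's remaining 7 ms.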
3. Priority Scheduling
In priority scheduling, a priority is associated with each process, and the CPU is
allocated to the process with the highest priority. Consider the following set of
processes, with the length of the CPU burst and the priority given:
Process Burst Time Priority
P1 10 3
P3 2 4
P4 1 5
P5 5 2
4. Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing
systems. It is similar to FCFS scheduling, but preemption is added to enable the system
to switch between processes. A small unit of time, called a time quantum or time slice,
is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready
queue is treated as a circular queue.
The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum. To implement RR scheduling, we again treat
the ready queue as a FIFO queue of processes. New processes are added to the tail of
the ready queue. The CPU scheduler picks the first process from the ready queue, sets
a timer to interrupt after 1 time quantum, and dispatches the process.
The average waiting time under the RR policy is often long. Consider the following set
of processes that arrive at time 0, with the length of the CPU burst given in milliseconds,
and a time quantum of 4 milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
Let’s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds
(10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average
waiting time is 17/3 = 5.66 milliseconds.
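This RR schedule can be reproduced with a small FIFO-queue simulation, assuming bursts of 24, 3, and 3 ms for P1, P2, and P3 (consistent with the stated waiting times) and a 4 ms quantum.

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    # bursts: {name: burst in ms}.  The ready queue is a FIFO; each
    # dispatch runs a process for up to one quantum, then requeues it
    # at the tail if work remains.
    queue = deque(bursts)
    remaining = dict(bursts)
    finish, t = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t
        else:
            queue.append(name)
    # waiting time = completion - burst (all processes arrive at time 0)
    return {n: finish[n] - bursts[n] for n in bursts}

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits, round(sum(waits.values()) / 3, 2))
# {'P1': 6, 'P2': 4, 'P3': 7} 5.67
```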
5. Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which
processes are easily classified into different groups. For example, a common division is
made between foreground (interactive) processes and background (batch) processes.
These two types of processes have different response-time requirements and so may
have different scheduling needs. In addition, foreground processes may have priority
(externally defined) over background processes. A multilevel queue scheduling
algorithm partitions the ready queue into several separate queues. The processes are
permanently assigned to one queue, generally based on some property of the process,
such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm. For example, separate queues might be used for foreground and
background processes. The foreground queue might be scheduled by an RR algorithm,
while the background queue is scheduled by an FCFS algorithm. In addition, there must
be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling.
For example, the foreground queue may have absolute priority over the background
queue. Let’s look at an example of a multilevel queue scheduling algorithm with five
queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty. If an interactive editing
process entered the ready queue while a batch process was running, the batch process
would be preempted. Another possibility is to time-slice among the queues. Here, each
queue gets a certain portion of the CPU time, which it can then schedule among its
various processes. For instance, in the foreground–background queue example, the
foreground queue can be given 80 percent of the CPU time for RR scheduling among
its processes, while the background queue receives 20 percent of the CPU to give to its
processes on an FCFS basis.
Normally, when the multilevel queue scheduling algorithm is used, processes are
permanently assigned to a queue when they enter the system. If there are separate
queues for foreground and background processes, for example, processes do not move
from one queue to the other, since processes do not change their foreground or
background nature. This setup has the advantage of low scheduling overhead, but it is
inflexible. The multilevel feedback queue scheduling algorithm, in contrast, allows a
process to move between queues. The idea is to separate processes according to the
characteristics of their CPU bursts. If a process uses too much CPU time, it will be
moved to a lower-priority queue.
For example, consider a multilevel feedback queue scheduler with three queues,
numbered from 0 to 2. A process entering the ready queue is put in queue 0. A process
in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time,
it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue
1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is
put into queue 2. Processes in queue 2 are run on an FCFS basis but are run only when
queues 0 and 1 are empty. This scheduling algorithm gives highest priority to any
process with a CPU burst of 8 milliseconds or less.
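The three-queue example can be sketched as a small simulation. This is a minimal model with invented process names and bursts: everything arrives at time 0, and a process is demoted only when it exhausts its quantum with work remaining.

```python
from collections import deque

def mlfq_trace(bursts, quanta=(8, 16)):
    # Three queues: queue 0 (RR, 8 ms quantum), queue 1 (RR, 16 ms
    # quantum), queue 2 (FCFS, runs to completion).  A process that
    # exhausts its quantum is demoted to the next queue.  Returns the
    # deepest queue each process ever reached.
    queues = [deque((n, b) for n, b in bursts.items()), deque(), deque()]
    deepest = {n: 0 for n in bursts}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, rem = queues[level].popleft()
        slice_ = rem if level == 2 else min(quanta[level], rem)
        rem -= slice_
        if rem > 0:                       # quantum exhausted: demote
            queues[level + 1].append((name, rem))
            deepest[name] = level + 1
    return deepest

print(mlfq_trace({"short": 6, "medium": 20, "long": 30}))
# {'short': 0, 'medium': 1, 'long': 2}
```

The 6 ms burst finishes inside its first quantum and never leaves queue 0; the 20 ms burst is demoted once; the 30 ms burst sinks all the way to the FCFS queue, exactly the behavior the paragraph describes.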