Module 1 Operating System Overview
1.1 Operating System Objectives and Functions
The OS provides services in the following areas:
• Access to I/O devices: Each I/O device requires its own peculiar set of instructions or
control signals for operation. The OS provides a uniform interface that hides these details so
that programmers can access such devices using simple reads and writes.
• Controlled access to files: For file access, the OS must reflect a detailed understanding of
not only the nature of the I/O device (disk drive, tape drive) but also the structure of the data
contained in the files on the storage medium. In the case of a system with multiple users, the
OS may provide protection mechanisms to control access to the files.
• System access: For shared or public systems, the OS controls access to the system as a
whole and to specific system resources. The access function must provide protection of
resources and data from unauthorized users and must resolve conflicts for resource
contention.
• Error detection and response: A variety of errors can occur while a computer system is
running. These include internal and external hardware errors, such as a memory error, or a
device failure or malfunction; and various software errors, such as division by zero, an attempt
to access a forbidden memory location, and the inability of the OS to grant the request of an
application. In each case, the OS must provide a response that clears the error condition with
the least impact on running applications. The response may range from ending the program
that caused the error, to retrying the operation, to simply reporting the error to the application.
• Accounting: A good OS will collect usage statistics for various resources and monitor
performance parameters such as response time. On any system, this information is useful in
anticipating the need for future enhancements and in tuning the system to improve
performance. On a multiuser system, the information can be used for billing purposes.
1.2 The Evolution of Operating Systems
Serial Processing
• With the earliest computers, from the late 1940s to the mid-1950s, the programmer
interacted directly with the computer hardware; there was no OS.
• These computers were run from a console consisting of display lights, toggle
switches, some form of input device, and a printer.
• Programs in machine code were loaded via the input device (e.g., a card reader).
• If an error halted the program, the error condition was indicated by the lights. If the
program proceeded to a normal completion, the output appeared on the printer.
These early systems presented two main problems:
• Scheduling: Most installations used a hardcopy sign-up sheet to reserve computer time.
Typically, a user could sign up for a block of time in multiples of a half hour or so. A user
might sign up for an hour and finish in 45 minutes; this would result in wasted computer
processing time. On the other hand, the user might run into problems, not finish in the
allotted time, and be forced to stop before resolving the problem.
• Setup time: A single program, called a job, could involve loading the compiler plus the
high-level language program (source program) into memory, saving the compiled program
(object program) and then loading and linking together the object program and common
functions. Each of these steps could involve mounting or dismounting tapes or setting up card
decks. If an error occurred, the hapless user typically had to go back to the beginning of the
setup sequence. Thus, a considerable amount of time was spent just in setting up the program
to run. This mode of operation could be termed serial processing, reflecting the fact that
users have access to the computer in series.
Simple Batch Systems
• Early computers were very expensive, and therefore it was important to maximize
processor utilization. The time wasted on scheduling and setup was unacceptable.
• To improve utilization, the concept of a batch operating system was developed.
• The central idea behind the simple batch-processing scheme is the use of a piece of
software known as the monitor. With this type of OS, the user no longer has direct
access to the processor. Instead, the user submits the job on cards or tape to a
computer operator, who batches the jobs together sequentially and places the entire
batch on an input device, for use by the monitor.
• Each program is constructed to branch back to the monitor when it completes
processing, at which point the monitor automatically begins loading the next program.
• Monitor point of view: The monitor controls the sequence of events. For this to be so,
much of the monitor must always be in main memory and available for execution (Figure
2.3). That portion is referred to as the resident monitor.
• The rest of the monitor consists of utilities and common functions that are loaded as
subroutines to the user program at the beginning of any job that requires them. The
monitor reads in jobs one at a time from the input device (typically a card reader or
magnetic tape drive).
• As it is read in, the current job is placed in the user program area, and control is
passed to this job.
• When the job is completed, it returns control to the monitor, which immediately reads
in the next job. The results of each job are sent to an output device, such as a printer,
for delivery to the user.
• Processor point of view:
• At a certain point, the processor is executing instructions from the portion of main
memory containing the monitor.
• These instructions cause the next job to be read into another portion of main memory.
• Once a job has been read in, the processor will encounter a branch instruction in the
monitor that instructs the processor to continue execution at the start of the user
program.
• The processor will then execute the instructions in the user program until it
encounters an ending or error condition.
• Either event causes the processor to fetch its next instruction from the monitor
program.
• Thus the phrase “control is passed to a job” simply means that the processor is now
fetching and executing instructions in a user program, and “control is returned to the
monitor” means that the processor is now fetching and executing instructions from
the monitor program.
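To make this control flow concrete, here is a minimal sketch in C (the job table and the job functions are hypothetical stand-ins; a real monitor loaded program images from cards or tape and branched to absolute addresses). The structure is the point: the monitor transfers control to a job and regains it when the job returns.

#include <stdio.h>

/* Hypothetical stand-in for a loaded job; in a real batch system this
   would be a program image read in from the input device. */
typedef void (*job_entry)(void);

static void job_a(void) { printf("job A runs, then branches back\n"); }
static void job_b(void) { printf("job B runs, then branches back\n"); }

int main(void) {
    job_entry batch[] = { job_a, job_b };   /* the batched jobs */
    int njobs = 2;

    for (int i = 0; i < njobs; i++) {
        /* "Control is passed to a job": the processor now fetches and
           executes instructions in the user program. */
        batch[i]();
        /* The job has branched back: "control is returned to the
           monitor", which immediately sets up the next job. */
    }
    return 0;
}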
Desirable Hardware Features
• Memory protection: While the user program is executing, it must not alter the memory
area containing the monitor. If such an attempt is made, the processor hardware should detect
an error and transfer control to the monitor. The monitor would then abort the job, print out an
error message, and load in the next job.
• Timer: A timer is used to prevent a single job from monopolizing the system. The timer is
set at the beginning of each job. If the timer expires, the user program is stopped, and control
returns to the monitor (a user-space analogy appears after this list).
• Privileged instructions: Certain machine level instructions are designated privileged and
can be executed only by the monitor. If the processor encounters such an instruction while
executing a user program, an error occurs causing control to be transferred to the monitor.
• Interrupts: Early computer models did not have this capability. This feature gives the OS
more flexibility in relinquishing control to and regaining control from user programs.
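The timer cannot be demonstrated from user space directly, but a rough analogy, assuming a POSIX system, is the alarm() call: the sketch below arms a 2-second alarm, and SIGALRM wrests control away from a loop that would otherwise run forever, much as the hardware timer stopped a job that monopolized the processor.

#include <signal.h>
#include <unistd.h>

/* Runs when the "timer expires"; only async-signal-safe calls here. */
static void on_alarm(int sig) {
    (void)sig;
    static const char msg[] = "timer expired: stopping the runaway job\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void) {
    signal(SIGALRM, on_alarm);  /* install the handler */
    alarm(2);                   /* "set the timer" to 2 seconds */
    volatile unsigned long n = 0;
    for (;;)
        n++;                    /* a job that never yields */
}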
Multiprogrammed Batch Systems
• The processor spends a certain amount of time executing, until it reaches an I/O
instruction. It must then wait until that I/O instruction concludes before proceeding.
• This inefficiency is not necessary. We know that there must be enough memory to
hold the OS (resident monitor) and one user program.
• Suppose that there is room for the OS and two user programs. When one job needs to
wait for I/O, the processor can switch to the other job, which is likely not waiting for
I/O (Figure 2.5b).
• Furthermore, we might expand memory to hold three, four, or more programs and
switch among all of them (Figure 2.5c). This approach is known as
multiprogramming, or multitasking. It is the central theme of modern operating
systems.
Time-Sharing Systems
• With the use of multiprogramming, batch processing can be quite efficient. However,
for many jobs, it is desirable to provide a mode in which the user interacts directly
with the computer.
• Indeed, for some jobs, such as transaction processing, an interactive mode is essential.
• Multiprogramming can also be used to handle multiple interactive jobs. This
technique is referred to as time sharing, because processor time is shared among
multiple users.
• In a time-sharing system, multiple users simultaneously access the system through
terminals, with the OS interleaving the execution of each user program in a short burst
or quantum of computation.
• Thus, if there are n users actively requesting service at one time, each user will only
see on the average 1/n of the effective computer capacity, not counting OS overhead.
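The interleaving can be pictured with a toy round-robin simulation in C (a sketch only; the three job lengths and the quantum of 2 time units are arbitrary, and a real scheduler is driven by the timer interrupt rather than a loop):

#include <stdio.h>

int main(void) {
    int remaining[3] = { 5, 3, 7 };  /* compute time left per job */
    const int quantum = 2;           /* time slice per turn */
    int unfinished = 3;

    while (unfinished > 0) {
        for (int i = 0; i < 3; i++) {
            if (remaining[i] == 0)
                continue;            /* job already done */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            remaining[i] -= slice;   /* job runs for one quantum */
            printf("job %d runs %d unit(s), %d left\n", i, slice, remaining[i]);
            if (remaining[i] == 0)
                unfinished--;
        }
    }
    return 0;
}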
Symmetric Multiprocessing (SMP)
In an SMP system, the kernel can execute on any processor, and typically each processor does
self-scheduling from the pool of available processes or threads.
The kernel can be constructed as multiple processes or multiple threads, allowing portions of
the kernel to execute in parallel.
The SMP approach complicates the OS. The OS designer must deal with the complexity due
to sharing resources (like data structures) and coordinating actions (like accessing devices)
from multiple parts of the OS executing at the same time.
• Scheduling: Any processor may perform scheduling, which complicates the task of
enforcing a scheduling policy and assuring that corruption of the scheduler data structures is
avoided. If kernel-level multithreading is used, then the opportunity exists to schedule
multiple threads from the same process simultaneously on multiple processors.
• Synchronization: With multiple active processes having potential access to shared
address spaces or shared I/O resources, care must be taken to provide effective
synchronization. Synchronization is a facility that enforces mutual exclusion and event
ordering. A common synchronization mechanism used in multiprocessor operating systems is
locks (a brief lock sketch follows this list).
• Reliability and fault tolerance: The OS should provide graceful degradation in the face
of processor failure. The scheduler and other portions of the OS must recognize the loss of a
processor and restructure management tables accordingly.
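As a sketch of lock-based mutual exclusion (using POSIX threads for illustration, not any kernel's internal lock implementation), the program below lets two threads increment a shared counter; without the mutex, concurrent updates could be lost. Compile with cc -pthread.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* enforce mutual exclusion */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}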
Multicore Systems
Current multicore vendors offer systems with up to eight cores on a single chip. With each
succeeding processor technology generation, the number of cores and the amount of shared
and dedicated cache memory increase, so that we are now entering the era of "many-core"
systems.
The design challenge for a many-core multicore system is to efficiently harness the multicore
processing power and intelligently manage the substantial on-chip resources.
Operating-System Structure
A system as large and complex as a modern operating system must be engineered carefully if
it is to function properly and be modified easily. A common approach is to partition the task
into small components, or modules, rather than have one monolithic system.
Simple Structure
Many operating systems do not have well-defined structures. Frequently, such systems
started as small, simple, and limited systems and then grew beyond their original scope.
• MS-DOS is an example of such a system. It was written to provide the most
functionality in the least space, so it was not carefully divided into modules. Figure
2.11 shows its structure.
• In MS-DOS, the interfaces and levels of functionality are not well separated. For
instance, application programs are able to access the basic I/O routines to write
directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to
errant (or malicious) programs, causing entire system crashes when user programs
fail.
• Of course, MS-DOS was also limited by the hardware of its era. Because the Intel
8088 for which it was written provides no dual mode and no hardware protection, the
designers of MS-DOS had no choice but to leave the base hardware accessible.
• Another example of limited structuring is the original UNIX operating system.
• It consists of two separable parts: the kernel and the system programs. The kernel
is further separated into a series of interfaces and device drivers, which have been added and
expanded over the years as UNIX has evolved.
• Everything below the system-call interface and above the physical hardware is the
kernel. The kernel provides the file system, CPU scheduling, memory management,
and other operating-system functions through system calls. An enormous amount of
functionality is thus combined into one level.
• This monolithic structure was difficult to implement and maintain.
• It had a distinct performance advantage, however: there is very little overhead in the
system call interface or in communication within the kernel.
Layered Approach
• With proper hardware support, operating systems can be broken into pieces that are
smaller and more appropriate than those allowed by the original MS-DOS and UNIX
systems.
• The operating system can then retain much greater control over the computer and over
the applications that make use of that computer.
• Implementers have more freedom in changing the inner workings of the system and in
creating modular operating systems.
• Under a top-down approach, the overall functionality and features are determined and
are separated into components.
• Information hiding is also important, because it leaves programmers free to
implement the low-level routines as they see fit, provided that the external interface of
the routine stays unchanged and that the routine itself performs the advertised task.
• A system can be made modular in many ways. One method is the layered
approach, in which the operating system is broken into a number of layers
(levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user
interface. This layering structure is depicted in Figure 2.13.
Microkernels
• As UNIX expanded, the kernel became large and difficult to manage. In the mid-
1980s, researchers at Carnegie Mellon University developed an operating system
called Mach that modularized the kernel using the microkernel approach.
• This method structures the operating system by removing all nonessential components
from the kernel and implementing them as system and user-level programs. The result
is a smaller kernel.
• There is little consensus regarding which services should remain in the kernel and
which should be implemented in user space.
• Typically, however, microkernels provide minimal process and memory management,
in addition to a communication facility. Figure 2.14 illustrates the architecture of a
typical microkernel.
• The main function of the microkernel is to provide communication between the
client program and the various services that are also running in user space.
• Communication is provided through message passing. For example, if the client
program wishes to access a file, it must interact with the file server. The client program and
service never interact directly. Rather, they communicate indirectly by exchanging messages
with the microkernel.
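The flavor of this interaction can be sketched in C with POSIX message queues (an illustration only; Mach uses its own ports and message formats, and /fileserver is a made-up queue name a file server would own). The client's "file request" becomes a message rather than a direct call:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t q = mq_open("/fileserver", O_CREAT | O_WRONLY, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* The request travels as a message; a server process would
       mq_receive() it, perform the read, and reply the same way. */
    const char req[] = "READ /etc/motd";
    if (mq_send(q, req, sizeof req, 0) == -1) perror("mq_send");

    mq_close(q);
    return 0;
}

(On Linux this is linked with -lrt.)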
Benefits of the microkernel approach:
1. It makes extending the operating system easier. All new services are added to user
space and consequently do not require modification of the kernel.
2. When the kernel does have to be modified, the changes tend to be fewer, because the
microkernel is a smaller kernel. The resulting operating system is easier to port from
one hardware design to another.
3. The microkernel also provides more security and reliability, since most services run
as user processes rather than kernel processes. If a service fails, the rest of the
operating system remains untouched.
Examples: Mac OS X (whose kernel is based in part on the Mach microkernel) and
QNX, a real-time operating system for embedded systems.
Drawback:
• Unfortunately, the performance of microkernels can suffer due to increased system-
function overhead.
Modules
• This methodology for operating-system design involves using loadable kernel
modules. The kernel has a set of core components and links in additional services via
modules, either at boot time or during run time.
• This type of design is common in modern implementations of UNIX, such as Solaris,
Linux, and Mac OS X, as well as Windows.
• The idea of the design is for the kernel to provide core services while other services
are implemented dynamically, as the kernel is running. Linking services dynamically
is preferable to adding new features directly to the kernel, which would require
recompiling the kernel every time a change was made.
• Thus, for example, we might build CPU scheduling and memory management
algorithms directly into the kernel and then add support for different file systems by
way of loadable modules.
• The overall result resembles a layered system in that each kernel section has defined,
protected interfaces; but it is more flexible than a layered system, because any module
can call any other module.
• The approach is also similar to the microkernel approach in that the primary module
has only core functions and knowledge of how to load and communicate with other
modules; but it
is more efficient because modules do not need to invoke message passing to communicate.
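A minimal loadable kernel module for Linux, the canonical "hello world" form, is sketched below. It assumes the kernel headers are installed and is built with the kernel's kbuild makefile system, then loaded with insmod and removed with rmmod; the messages appear in the kernel log.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Runs when the module is linked into the running kernel
   (at boot time or via insmod). */
static int __init hello_init(void) {
    printk(KERN_INFO "hello: module loaded\n");
    return 0;               /* 0 indicates success */
}

/* Runs when the module is unloaded via rmmod. */
static void __exit hello_exit(void) {
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");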
Hybrid Systems
• In practice, very few operating systems adopt a single, strictly defined structure.
Instead, they combine different structures, resulting in hybrid systems that address
performance, security, and usability issues.
• For example, both Linux and Solaris are monolithic, because having the operating
system in a single address space provides very efficient performance. However, they
are also modular, so that new functionality can be dynamically added to the kernel.
Windows is largely monolithic as well (again primarily for performance reasons), but
it retains some behavior typical of microkernel systems, including providing support
for separate subsystems (known as operating-system personalities) that run as user-
mode processes. Windows systems also provide support for dynamically loadable
kernel modules.
System Calls
Linux Shell
• Although Linux systems have a graphical user interface, most programmers and
sophisticated users still prefer a command-line interface, called the shell.
• For many tasks, the shell's command-line interface is faster to use than a GUI.
The bash shell (bash)
• It is heavily based on the original UNIX shell, Bourne shell (written by Steve
Bourne, then at Bell Labs). Its name is an acronym for Bourne Again SHell. Many
other shells are also in use (ksh, csh, etc.), but bash is the default shell in most Linux
systems.
• When the shell starts up, it initializes itself, then types a prompt character, often a
percent or dollar sign, on the screen and waits for the user to type a command line.
• When the user types a command line, the shell extracts the first word from it, where
word here means a run of characters delimited by a space or tab.
• It then assumes this word is the name of a program to be run, searches for this
program, and if it finds it, runs the program.
• The shell then suspends itself until the program terminates, at which time it tries to
read the next command.
• The shell is an ordinary user program. All it needs is the ability to read from the
keyboard and write to the monitor and the power to execute other programs.
• Commands may take arguments, which are passed to the called program as character
strings. For example, the command line
cp src dest
invokes the cp program with two arguments, src and dest. This program interprets
the first one to be the name of an existing file. It makes a copy of this file and calls
the copy dest.
• Not all arguments are file names. In
head -20 file
the first argument, -20, tells head to print the first 20 lines of file, instead of the
default number of lines, 10.
• Arguments that control the operation of a command or specify an optional value are
called flags, and by convention are indicated with a dash. The dash is required to
avoid ambiguity, because the command
head 20 file
is perfectly legal, and tells head to first print the initial 10 lines of a file called 20, and
then print the initial 10 lines of a second file called file. Most Linux commands accept
multiple flags and arguments.
• To make it easy to specify multiple file names, the shell accepts magic characters,
sometimes called wild cards. An asterisk, for example, matches all possible strings,
so
ls *.c
tells ls to list all the files whose name ends in .c.
• A program like the shell does not have to open the terminal (keyboard and monitor) in
order to read from it or write to it. Instead, when it (or any other program) starts up, it
automatically has access to a file called standard input (for reading), a file called
standard output (for writing normal output), and a file called standard error (for
writing error messages).
• A program that reads its input from standard input, does some processing on it, and
writes its output to standard output is called a filter.
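A complete filter takes only a few lines of C. The sketch below (an illustration, not a standard utility) copies standard input to standard output, converting to upper case along the way, so it can sit in a pipeline like any other filter:

#include <ctype.h>
#include <stdio.h>

/* Read standard input, transform, write standard output. The shell
   connects these streams; the program never opens the terminal. */
int main(void) {
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}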
• It is possible to put a list of shell commands in a file and then start a shell with this
file as standard input. The (second) shell just processes them in order, the same as it
would with commands typed on the keyboard.
• Files containing shell commands are called shell scripts. Shell scripts may assign
values to shell variables and then read them later.
chmod
• In Linux, access to files is managed through file permissions, attributes, and
ownership. This ensures that only authorized users and processes can access files and
directories.
• The chmod command is used to change the access permissions of files and
directories. The name is an abbreviation of change mode.
• Syntax:
chmod [reference][operator][mode] file...
• The references distinguish the users to whom the permissions apply, i.e., they are a
list of letters that specifies whom to give permissions to. Permissions are defined for the
owner of the file (the "user"), members of the group that owns the file (the "group"), and
anyone else ("others"). There are two ways to represent these permissions: with symbols
(letters), or with octal numbers (the digits 0 through 7). The references are represented by
one or more of the following letters:
Reference   Class    Description
u           user     The owner of the file
g           group    Users who are members of the file's group
o           others   Users who are neither the owner nor members of the group
a           all      All three of the above; equivalent to ugo
Numeric mode
A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with
values 4 (read), 2 (write), and 1 (execute). Any omitted digits are assumed to be leading
zeros. For example, 6 = 4 + 2 grants read and write, so chmod 640 file gives the owner
read/write, the group read, and others no access.
EXAMPLES
Read by owner only
$ chmod 400 sample.txt
Read by group only
$ chmod 040 sample.txt
Read by anyone
$ chmod 004 sample.txt
Write by owner only
$ chmod 200 sample.txt
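The chmod command is itself a thin wrapper over the chmod() system call. As a sketch (sample.txt is just the placeholder file name used in the examples above), the following C program sets mode 0640, i.e., read/write for the owner and read for the group:

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* 0640 octal: owner rw-, group r--, others --- */
    if (chmod("sample.txt", 0640) == -1) {
        perror("chmod");
        return 1;
    }
    return 0;
}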
Symbolic mode
The format of a symbolic mode is '[ugoa...][[+-=][rwxXstugo...]...][,...]'. Multiple
symbolic operations can be given, separated by commas.
EXAMPLES
Deny execute permission to everyone.
$ chmod a-x sample.txt
Allow read permission to everyone.
$ chmod a+r sample.txt
Make a file readable and writable by the group and others.
$ chmod go+rw sample.txt
Linux Kernel
System calls: The system call is the means by which a process requests a specific kernel
service. There are several hundred system calls, which can be roughly grouped into six
categories: filesystem, process, scheduling, interprocess communication, socket (networking),
and miscellaneous. All system calls come here, causing a trap that switches execution from
user mode into protected kernel mode and passes control to one of the kernel components.
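For illustration, the short C program below exercises a few common calls from the filesystem and process categories: open, write, close, and getpid (the path /tmp/demo.txt is arbitrary). Each call traps into the kernel, which performs the service and returns to user mode:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* open(2), filesystem category: create or truncate a file */
    int fd = open("/tmp/demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    /* getpid(2), process category: ask the kernel for our identity */
    char buf[64];
    int n = snprintf(buf, sizeof buf, "written by pid %d\n", (int)getpid());

    /* write(2) and close(2), filesystem category again */
    write(fd, buf, (size_t)n);
    close(fd);
    return 0;
}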
Interrupts and Dispatcher:
The kernel sits directly on the hardware; it enables interaction with I/O devices and the
memory management unit, and it controls CPU access to them.
• Interrupt handlers are the primary way of interacting with devices; closely related is
the low-level dispatching mechanism.
• This dispatching occurs when an interrupt happens. The low-level code here stops the
running process, saves its state in the kernel process structures, and starts the
appropriate driver.
• Process dispatching also happens when the kernel completes some operations, and it
is time to start up a user process again. The dispatching code is in assembler and is
quite distinct from scheduling.
• To the right in Fig. 10-3 are the other two key components of the Linux kernel.
These are responsible for the memory and process management tasks.
• Memory management tasks include maintaining the virtual-to-physical memory
mappings, maintaining a cache of recently accessed pages, implementing a good
page-replacement policy, and bringing new pages of needed code and data into
memory on demand.
• The key responsibility of the process-management component is the creation and
termination of processes (a minimal fork/wait sketch follows this list). It also includes
the process scheduler, which chooses which process or, rather, thread to run next.
• Code for signal handling also belongs to this component.
• While the three components are represented separately in the figure, they are highly
interdependent.
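As a sketch of process creation and termination using the standard POSIX interface (user-visible system calls, not the kernel-internal code paths described above), the program below forks a child, has the child run /bin/echo, and lets the parent wait for it to terminate:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: replace this process image with /bin/echo. */
        execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
        perror("execl");             /* reached only if exec fails */
        return 1;
    }

    /* Parent: wait for the child's termination. */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}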
University Questions
DECEMBER 18
Sr.No  Question                                                                 Marks
1      Explain the difference between monolithic kernel and microkernel            5
2      What is an operating system? Explain various functions and objectives      10
3      What is a system call? Explain any 5 system calls in detail.               10
       Total                                                                      25

DECEMBER 19
Sr.No  Question                                                                 Marks
1      Discuss Operating System as a Resource Manager                              5
2      Describe Microkernel with a diagram                                         5
       Total                                                                      10

MAY 18
Sr.No  Question                                                                 Marks
1      Explain the difference between monolithic kernel and microkernel            5
2      What is an operating system? Explain various functions and objectives      10
3      What is a system call? Explain any 5 system calls in detail.               10
       Total                                                                      25

MAY 19
Sr.No  Question                                                                 Marks
1      Define Operating System. Brief the Functions of OS                          5
2      Explain Shell. Explain use of chmod command in Linux                        5
3      Differentiate between monolithic, layered, and microkernel structures of OS 10
4      Write short notes on: System Calls                                         10
       Total                                                                      30
References
1. William Stallings, Operating Systems: Internals and Design Principles, 8th Edition,
Prentice Hall, 2014. ISBN-10: 0133805913; ISBN-13: 978-0133805918.
2. Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne, Operating System Concepts,
9th Edition, John Wiley & Sons, Inc., 2016. ISBN 978-81-265-5427-0.
3. Andrew Tanenbaum, Operating System Design and Implementation, 3rd Edition, Pearson.