17CS64 Module1
MODULE-1
INTRODUCTION
An Operating System (OS) is system software that manages the computer hardware.
o It provides a basis for application programs and acts as an intermediary between the
computer users and the computer hardware.
o The purpose of an OS is to provide an environment in which the user can execute
programs in a convenient and efficient manner.
Hardware: The Hardware consists of memory, CPU, ALU, I/O devices, peripherals
devices & storage devices.
OS: The OS controls & co-ordinates the use of hardware among various application
programs for various users.
Application Programs: The application programs include word processors, spreadsheets,
compilers, and web browsers, which define the ways in which these resources are
used to solve the users' problems.
User: The person who uses the system to get the required work done.
The following figure shows the abstract view of the components of a computer system
To completely understand the role of operating system two views are considered as below:
User View:
System View:
Storage Structure
Main memory is the large storage media that the CPU can access directly. It forms
an array of memory words. Each word has its own address. Interaction is achieved
through a sequence of load and store instructions to specify memory addresses. The
load instruction moves a word from main memory to an internal register within the
CPU, where as store instruction moves the content of a register to main memory.
However, main memory is a small, volatile storage device, so computer systems
provide secondary storage with large nonvolatile capacity.
Magnetic disks are the common secondary storage devices. Other storage devices are
cache memory, CD-ROM, magnetic tapes and so on.
I/O Structure
This form of interrupt-driven I/O works well for moving small amounts of data but
incurs high overhead when a bulk transfer is required. To solve this problem, DMA
(direct memory access) is used for high-speed I/O devices. The device controller
transfers an entire block of data between its buffer storage and main memory
without CPU intervention. Only one interrupt is generated per block, rather than
one interrupt per byte.
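The interrupt-count difference can be made concrete with a toy calculation (a sketch, not real hardware; the function names are illustrative):

```python
# Toy comparison: interrupts raised when moving data byte-by-byte under
# interrupt-driven I/O versus block-by-block under DMA.

def interrupts_per_byte(num_bytes: int) -> int:
    # Interrupt-driven I/O: the device raises one interrupt per byte moved.
    return num_bytes

def interrupts_with_dma(num_bytes: int, block_size: int) -> int:
    # DMA: the controller moves a whole block, then raises one interrupt.
    return -(-num_bytes // block_size)  # ceiling division

if __name__ == "__main__":
    n = 4096  # bytes to transfer
    print(interrupts_per_byte(n))        # 4096 interrupts without DMA
    print(interrupts_with_dma(n, 512))   # 8 interrupts with 512-byte blocks
```

For a 4 KB transfer with 512-byte blocks, the CPU handles 8 interrupts instead of 4096.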
The figure shows the interplay of all components of a computer system.
A single-processor system has one main CPU capable of executing a general-purpose
instruction set, including instructions from user processes.
Some systems also have special-purpose processors to perform specific tasks; on
mainframes, they may come in the form of more general-purpose processors, such as
I/O processors. These special-purpose processors have a limited instruction set and
do not run user processes. The OS manages them by sending them information about
their next task and monitoring their status.
Ex1: A disk controller microprocessor receives a sequence of requests from the
main CPU and implements its own disk queue and scheduling algorithm. This
relieves the main CPU from disk scheduling.
Ex2: PCs contain a microprocessor in the keyboard to convert the keystrokes into
codes to be sent to the CPU.
These special-purpose processors do not turn a single-processor system into a
multiprocessor system.
Multiprocessor Systems
Multiprocessor systems have more than one processor in close communication. Also
known as Tightly Coupled System or Parallel Systems.
They share computer bus, the clock, memory & peripheral devices.
Two processes can run in parallel.
Multi Processor Systems have 3 advantages,
o Increased Throughput: By increasing the number of processors, we can get more
work done in less time. However, the speed-up ratio with N processors is not N but
less than N, because of the overhead of keeping all processors working correctly
and of contention for shared resources.
o Economy of Scale: Because multiprocessor systems share peripherals, mass storage,
and power supplies, they cost less than multiple single-processor systems. If many
programs operate on the same data, the data can be stored on one disk and shared
by all processors, instead of being maintained on several systems.
o Increased Reliability: If work is distributed properly among several processors,
then the failure of one processor does not halt the system; it only slows it down.
The ability to continue providing service proportional to the level of surviving
hardware is called graceful degradation. Systems that provide graceful degradation
are fault tolerant. Fault tolerance requires a mechanism to allow the failure to
be detected, diagnosed, and, if possible, corrected.
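The "speed-up less than N" claim can be sketched with Amdahl's law (the law is not named in these notes; it is brought in here as a standard model, and the serial fraction used is an arbitrary illustrative value):

```python
# Amdahl's law: if a fraction s of the work is inherently serial, the
# speedup with N processors is 1 / (s + (1 - s) / N), which is always
# below N for any s > 0.

def speedup(n_processors: int, serial_fraction: float) -> float:
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

if __name__ == "__main__":
    # Even with only 5% serial work, 10 processors give well under 10x.
    print(round(speedup(10, 0.05), 2))  # 6.9, not 10
```

This is one model of the overhead; real systems also lose time to lock contention and cache traffic.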
Multi processor systems are of two types
o Asymmetric Multiprocessing: Each processor is assigned a specific task, using a
master-slave relationship. A master processor controls the system; it schedules
and allocates work to the slave processors.
o Symmetric Multiprocessing (SMP): Each processor performs all tasks within
the OS. SMP means all processors are peers i.e. no master slave relationship
exists between processors. Each processor concurrently runs a copy of OS.
Ex: Solaris. The following figure shows SMP architecture.
The differences between symmetric & asymmetric multiprocessing may result from
either hardware or software. Special hardware can differentiate the multiple processors,
or the software can be written to allow only one master & multiple slaves.
A recent trend in CPU design is to include multiple compute cores on a single chip.
Blade servers are a recent development in which multiple processor boards, I/O boards,
and networking boards are placed in the same chassis. Each processor board can boot
independently and run its own OS.
Clustered Systems
The clustered systems have multiple CPUs but they are composed of two or more
individual systems coupled together.
Clustered systems share storage and are closely linked via LAN Network.
Clustering is usually used to provide high availability.
A layer of cluster software runs on the cluster nodes. Each node can monitor one or
more of the others. If the monitored machine fails, the monitoring machine takes
ownership of its storage and restarts the applications that were running on failed
machine.
Clustered systems can be categorized into two groups
1. Asymmetric Clustering.
2. Symmetric clustering.
In asymmetric clustering, one machine is in hot-standby mode while the others run
the applications. The hot-standby machine does nothing but monitor the active
server. If the server fails, the hot-standby machine becomes the active server.
In symmetric mode two or more hosts are running the application & they monitor
each other. This mode is more efficient since it uses all the available hardware.
Other forms of clusters include parallel clusters and clustering over WAN.
Parallel clusters allow multiple hosts to access the same data on shared storage.
To provide this shared access, the system must also supply access control and
locking to ensure that no conflicting operations occur. This function is known as
a distributed lock manager (DLM).
Clustering provides better reliability than the multiprocessor systems.
Multiprogramming system
A single user cannot keep the CPU and I/O devices busy at all times.
Multiprogramming increases CPU utilization by organizing jobs so that CPU always
has one to execute.
The OS has to keep several jobs in memory simultaneously as shown in figure
The OS picks up and starts executing one of the jobs.
Eventually, the job may have to wait for some task, such as an I/O operation, to
complete; in a non-multiprogrammed system the CPU would then sit idle.
But in a multiprogrammed system instead of having the CPU idle the OS switches to
the next job in the memory.
Timesharing (multitasking)
Dual-mode operation
When the operating system gains control of the computer, it is in kernel mode. The
system always switches to user mode before passing control to a user program. This
protects the OS.
Timer
A timer is used to prevent a program from getting stuck in an infinite loop, or from
never calling system services and never returning control to the OS.
Timer can be set to interrupt the computer after a specific period.
The period may be fixed or variable. A variable timer is implemented by a fixed-
rate clock and a counter: whenever the clock ticks, the operating system decrements
the counter, and when the counter reaches zero, an interrupt is generated.
The timer must be set before scheduling a process, so that the OS can regain control
or terminate a program that exceeds its allotted time.
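The clock-and-counter mechanism above can be sketched as follows (a minimal simulation; class and method names are illustrative, not a real kernel interface):

```python
# Minimal sketch of a variable timer: each tick of a fixed-rate clock
# decrements a counter; when it reaches zero, an "interrupt" fires and
# control would return to the OS.

class VariableTimer:
    def __init__(self, ticks_until_interrupt: int):
        self.counter = ticks_until_interrupt
        self.interrupt_raised = False

    def clock_tick(self):
        # Called on every tick of the fixed-rate clock.
        if self.interrupt_raised:
            return
        self.counter -= 1
        if self.counter == 0:
            self.interrupt_raised = True  # OS regains control here

if __name__ == "__main__":
    timer = VariableTimer(ticks_until_interrupt=3)
    for _ in range(5):
        timer.clock_tick()
    print(timer.interrupt_raised)  # True: fired after 3 ticks
```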
Process Management
Memory Management
The CPU reads the instruction from main memory during instruction fetch cycle &
during the data-fetch cycle it reads & writes the data.
Main memory is the only large storage device that the CPU can address and access
directly.
For a program to be executed, it must be loaded into memory and mapped to absolute
addresses. When the program terminates, its memory is declared available again.
To improve the utilization of CPU & the response time several programs will be kept
in memory.
Several memory management schemes are available & selection depends on the
Hardware design of the system.
The OS is responsible for the following activities
o Keeping track of which parts of the memory are used & by whom.
o Deciding which process and data to move into and out of memory.
o Allocating & deallocating memory space as needed.
Storage Management
Mass-storage devices have their own unique characteristics, such as access speed,
capacity, data-transfer rate, and access method (sequential or random).
OS implements the abstract concept of a file by managing mass storage media like
tapes, disks etc.,
A file is a collection of related information defined by its creator. They commonly
represent programs (source and object) and data. Data files may be numeric,
alphabetic or alphanumeric.
Files can be organized into directories to make them easier to use.
The OS is responsible for the following activities,
o Creating & deleting files.
o Creating & deleting directories.
o Supporting primitives for manipulating files & directories.
o Mapping files onto secondary storage.
o Backing up files on stable (non volatile) storage media.
The computer system must provide secondary storage to back up main memory because,
o Main memory is too small to accommodate all data and programs.
o Data held in main memory is lost when the power goes off.
Most programs, including compilers, assemblers, word processors, and editors, are
stored on disk until loaded into memory, and then use the disk as both the source
and destination of their processing. Hence proper management of disk storage is
very important.
Caching
In a hierarchical storage structure, the same data may appear at different levels of
the storage system. For example, suppose an integer A that is to be incremented by 1
is located in file B, which resides on disk. The migration of A from disk to
register is shown below.
Once the increment to A takes place in the internal registers, the value of A differs in
various storage systems. The value of A becomes same only after the new value of A
is written from the internal register back to the disk.
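The migration and write-back can be sketched as a toy model (the level names and the dict are purely illustrative; a real memory system is not a dictionary):

```python
# Toy model of the hierarchy example: integer A migrates
# disk -> main memory -> cache -> register to be incremented, and the
# copies disagree until the new value is written back down.

storage = {"disk": 5, "memory": None, "cache": None, "register": None}

# Migrate A upward, making a copy at each level.
storage["memory"] = storage["disk"]
storage["cache"] = storage["memory"]
storage["register"] = storage["cache"]

storage["register"] += 1  # the increment happens in the register

# The register and disk copies now differ.
print(storage["register"], storage["disk"])  # 6 5

storage["disk"] = storage["register"]  # write-back restores consistency
print(storage["disk"])  # 6
```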
In multitasking environments, extreme care must be taken to use the most recent
value, no matter where it is stored in the storage hierarchy.
The situation becomes more complicated in a multiprocessor environment, where each
CPU has a local cache. Care must be taken to ensure that an update to the value of
A in one cache is immediately reflected in all other caches. This is called cache
coherency.
The situation becomes even more complex in a distributed environment. Several
copies of the same file can be kept on different computers. Since the various replicas
may be accessed and updated concurrently, some distributed systems ensure that,
when a replica is updated in one place all other replicas are also updated as soon as
possible.
I/O Systems
If a computer system has multiple users and allows the concurrent execution of
multiple processes, then a protection mechanism is required to regulate access to data.
System resources like files, memory segments, CPU etc. are made available to only
those processes which have gained authorization from OS.
Protection is a mechanism for controlling the access of processes or users to the
resources defined by a computer system.
The mechanism must provide a means to specify the controls to be imposed and a
means to enforce them.
Distributed Systems
Distributed systems depend on networking for their functionality. Network may vary
by the protocols used, distance between nodes (LAN, WAN, MAN, etc) & transport
media.
A network operating system is an OS that provides features such as file sharing
across the network and allows different processes on different computers to exchange
messages.
The advantages of Distributed Systems are,
o Resource sharing
o Higher reliability
o Better price performance ratio
o Shorter response time
o Higher throughput
o Incremental growth
Special-purpose computers are those whose functions are more limited and whose
objective is to deal with limited computation domains, e.g., real-time embedded
systems, multimedia systems, and handheld systems.
Embedded computers are found almost everywhere: in car engines, robots, alarm
systems, medical imaging systems, industrial control systems, microwave ovens,
weapon systems, etc.
This class of computers has very specific tasks and runs an OS with very limited
features. Usually they have little or no user interface.
Real-time behavior imposes two conditions on the implementation: CPU scheduling
must be priority based, and dispatch latency must be small.
Multimedia Systems
Handheld Systems
Computing Environments
Traditional Computing
Consider the “typical office environment”: a few years back, it consisted of PCs
connected to a network, with servers providing file and print service. Remote
access was awkward, and portability was achieved by carrying a laptop.
Terminals attached to mainframes were common at many companies, with even fewer
remote-access and portability options.
The web technologies are stretching the boundaries of traditional computing.
Companies have portals, which provide web access to their internal servers. Network
computers are terminals that understand web-based computing. Handheld PDAs can
also connect to wireless networks to use the company's web portal.
Client-Server Computing
Peer-to-Peer(P2P) Computing
It is another form of a distributed system. Here, clients and servers are not
distinguished from one another.
All nodes within the system are considered as peers. Each can act as a server or a
client depending on who is requesting or providing a service.
The advantage is the removal of the bottleneck, as services can be provided by
several nodes distributed throughout the network.
To participate in a P2P system a node must first join the network of peers. On joining,
the new node can provide and request for services.
Determining what services are available in the network can be accomplished in one
of two methods,
1. When a node joins the network, it registers its services with a centralized
lookup service on the network. Any node desiring a service first contacts the
centralized lookup service to determine which nodes provide it; then
communication takes place between the client and the service provider.
2. A peer acting as a client broadcasts a request for a service to all nodes in
the network, and the nodes that provide the service respond to the requesting
peer. A discovery protocol is used by peers to discover the services provided
by other peers.
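The first discovery method can be sketched as a registry kept by the centralized lookup service (class and method names are illustrative assumptions, not a real P2P protocol):

```python
# Sketch of centralized P2P lookup: joining peers register their services;
# a client queries the lookup service to find providers before contacting
# one of them directly.

class LookupService:
    def __init__(self):
        self._registry = {}  # service name -> list of provider node ids

    def register(self, node_id: str, service: str):
        # Called by a node when it joins the network.
        self._registry.setdefault(service, []).append(node_id)

    def find_providers(self, service: str):
        # Called by a client wanting the service.
        return self._registry.get(service, [])

if __name__ == "__main__":
    lookup = LookupService()
    lookup.register("nodeA", "file-share")
    lookup.register("nodeB", "file-share")
    # The client then communicates with one of these nodes directly.
    print(lookup.find_providers("file-share"))  # ['nodeA', 'nodeB']
```

Note that the lookup service only brokers the introduction; the actual data transfer happens peer to peer.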
Web-based computing has led to access by a wider variety of devices than just PCs
and workstations, such as PDAs and cell phones.
Web computing has increased the emphasis on networking.
Devices that were not previously networked now have wired or wireless access.
The network connectivity is faster through improved network technology and
optimized network implementation code.
Web based computing has given rise to a new category of devices called load
balancers which distribute network connections among a pool of similar servers.
1. User interface: Almost all operating systems have a user interface (UI). This
interface can take several forms.
a. Command-line interface (CLI): uses text commands and a specific
method for entering them.
b. Batch Interface: commands and directives to control are entered into files
and those files are executed.
c. Graphical User Interface (GUI): most common. Interface is a window
system with a pointing device directing the I/O, choose from menus, make
selections along with keyboard to enter text.
2. Program Execution: The OS must be able to load a program into memory and run
it. The program must be able to end its execution, either normally or
abnormally.
3. I/O Operation: A running program may require I/O (a file or an I/O device).
Users cannot control I/O devices directly, so the OS must provide a means for
controlling them.
4. File System manipulation: Program needs to read and write files and directories.
They also need to create and delete files, search for a given file and list file
information. Some programs include permission management to deny access to files
or directories based on file ownership.
7. Resource Allocation: When multiple users log onto the system or multiple jobs
are running, resources must be allocated to each of them. The OS manages many
different types of resources. Some resources may need special allocation code,
while others may have general request and release code.
8. Accounting: We need to keep track of which users use how many & what kind of
resources. This record keeping may be used for accounting. This accounting data may
be used for statistics or billing. It can also be used to improve system efficiency.
9. Protection and security: Protection ensures that all access to system resources
is controlled. Security starts with each user being authenticated to the system,
usually by means of a password. External I/O devices must also be protected from
invalid access. In a multiprocess environment, one process may interfere with
another or with the OS, so protection is required.
There are two fundamental approaches for users to interface with OS,
1. Command Interpreter
2. Graphical User Interface
Command Interpreter(CI)
Some OSes include the CI in the kernel; in others, such as Windows XP and UNIX, it
is treated as a special program that runs when a job is initiated or when a user
first logs on.
On systems with multiple command interpreters to choose from, the interpreters are
known as shells. For example, on UNIX and Linux systems there are several shells a
user may choose from, including the Bourne shell, C shell, and Korn shell.
The main function of the command interpreter is to get and execute the next user-
specified command.
Commands can be implemented in two general ways:
1. In one approach, the command interpreter itself contains the code to execute
the command. For example, a command to delete a file may cause the command
interpreter to jump to a section of its code that sets up the parameters and
makes the appropriate system call. In this approach, the size of the command
interpreter grows with the number of commands that can be given.
2. An alternative approach, used by UNIX, implements most commands through
system programs. The command interpreter uses the command to identify a file
to be loaded into memory and executed. For example, rm file.txt would make
the command interpreter search for a file named rm, load that file into
memory, and execute it with the parameter file.txt.
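The UNIX search-load-execute behavior can be imitated in a few lines of Python (a rough analogue only: `shutil.which` and `subprocess.run` stand in for the shell's PATH search and fork/exec; a real shell is not implemented this way):

```python
# Rough analogue of the UNIX shell approach: the "shell" does not contain
# the command's code; it searches for an executable file with the command's
# name and runs it with the given arguments.

import shutil
import subprocess

def run_command(line: str) -> int:
    cmd, *args = line.split()
    path = shutil.which(cmd)  # search for a file named `cmd` on the PATH
    if path is None:
        print(f"{cmd}: command not found")
        return 127  # conventional "command not found" exit status
    # Load that file and execute it with the parameters.
    return subprocess.run([path, *args]).returncode

if __name__ == "__main__":
    run_command("echo hello")  # runs the external echo program
```

The interpreter itself stays small; adding a new command means adding a new program file, not changing the shell.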
SYSTEM CALLS
o Next, we enter a loop that reads from the input file (system call) and writes to
the output file (system call). A read or write operation may fail, which requires
another system call to handle.
o Finally, after the entire file is copied, the program closes both files
(system call), writes a message to the console (system call), and terminates
normally (system call).
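The copy program's system-call sequence can be sketched with Python's os module, whose open/read/write/close functions wrap the corresponding UNIX system calls (error handling is reduced to a `finally` block for brevity):

```python
# Sketch of the file-copy example: open both files, loop reading and
# writing a block at a time, then close both. Each os.* call below is a
# thin wrapper over the like-named UNIX system call.

import os

def copy_file(src: str, dst: str, block_size: int = 4096):
    in_fd = os.open(src, os.O_RDONLY)                             # open input
    out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # create output
    try:
        while True:
            block = os.read(in_fd, block_size)   # read (system call)
            if not block:                        # empty read = end of file
                break
            os.write(out_fd, block)              # write (system call)
    finally:
        os.close(in_fd)                          # close both files
        os.close(out_fd)

if __name__ == "__main__":
    with open("in.txt", "w") as f:
        f.write("hello, system calls")
    copy_file("in.txt", "out.txt")
    print(open("out.txt").read())  # hello, system calls
```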
Three general methods are used to pass the parameters to the OS.
1. The simplest approach is to pass the parameters in registers.
2. In some cases there can be more parameters than registers. The parameters are
then stored in a block or table in memory, and the address of the block is
passed as a parameter in a register, as shown in the figure below. This
approach is used by Linux and Solaris.
3. Parameters can also be placed or pushed onto stack by the program & popped
off the stack by the OS.
Some OS prefer the block or stack methods, because those approaches do not limit the
number or length of parameters being passed.
1. Process control
end, abort
load, execute
A process executing one program may want to load and execute another
program. This feature allows the command interpreter to execute programs as
directed by the user.
The question of where to return control when the loaded program terminates is
related to whether the existing program is lost, saved, or allowed to continue
execution concurrently with the new program. There is a system call for this
purpose (create process or submit job).
When the event has occurred, the job signals the occurrence through the signal
event system call.
get process attributes, set process attributes
wait for time
allocate and free memory
o In MS-DOS
MS-DOS is an example of a single-tasking system; its command interpreter is
invoked when the computer is started, as shown in figure (a).
To run a program, MS-DOS uses a simple method: it does not create a new process.
It loads the program into memory and gives the program as much memory as
possible, as shown in figure (b).
o In FreeBSD
FreeBSD is an example of a multitasking system.
In FreeBSD, the command interpreter may continue running while another program is
executed, as shown in the figure.
2. File management
3. Device management
After using a device, it must be released with the release system call. These
functions are similar to the open and close system calls for files.
Read, write, and reposition system calls may be used with devices.
MS-DOS and UNIX merge the I/O devices and the files into a combined file-device
structure.
4. Information maintenance
Many system calls exist for the purpose of transferring information between the
user program and the operating system.
For example, most systems have a system call to return the current time and date.
Other system calls may return information about the system, such as the number
of current users, the version number of the operating system, the amount of free
memory or disk space, and so on.
The operating system also keeps information about all its processes, and system
calls are used to access this information.
System calls are also used to reset the process information (get process
attributes and set process attributes).
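A couple of these information-maintenance calls can be exercised directly from Python, whose `time.time()` and `os.getpid()` wrap the underlying system calls for the current time and the calling process's identifier:

```python
# Two information-maintenance system calls, via their Python wrappers:
# the current time and this process's identifier.

import os
import time

current_time = time.time()  # seconds since the epoch (time system call)
pid = os.getpid()           # identifier of the calling process

print(current_time > 0, pid > 0)  # True True
```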
5. Communications
create, delete communication connection
send, receive messages
transfer status information
attach and detach remote devices
The recipient process must give its permission for communication to take place
with an accept connection system call.
The receiving daemons execute a wait for connection call and are awakened
when a connection is made.
The source of the communication, known as the client, and the receiving daemon,
known as a server, exchange messages by using read message and write
message system calls.
The close connection call terminates the communication.
In shared memory model, processes use shared memory create and shared
memory attach system calls to create and gain access to regions of memory
owned by other processes.
The OS tries to prevent one process from accessing another process’s memory, so
several processes have to agree to remove this restriction. Then they exchange
information by reading and writing in the shared areas.
The processes are also responsible for ensuring that they are not writing to the
same location simultaneously.
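The create/attach pattern can be sketched with Python's `multiprocessing.shared_memory` (available from Python 3.8), which stands in here for the shared memory create and shared memory attach calls; a real UNIX program would use calls such as shmget/shmat, and the "two processes" are simulated within one process for brevity:

```python
# Sketch of the shared-memory model: one side creates a named region and
# writes into it; the other attaches to the same region by name and reads
# what was written.

from multiprocessing import shared_memory

# "Process 1": create a shared region and write into it.
creator = shared_memory.SharedMemory(create=True, size=16)
creator.buf[:5] = b"hello"

# "Process 2" (simulated in the same process): attach by name.
attacher = shared_memory.SharedMemory(name=creator.name)
seen = bytes(attacher.buf[:5])
print(seen)  # b'hello'

attacher.close()
creator.close()
creator.unlink()  # remove the region once all processes are done
```

Both sides see the same bytes because they map the same region; coordinating concurrent writes is still the processes' own responsibility, as the notes state.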
System Programs
i. File management: These programs create, delete, copy, rename, print, dump,
list, and generally manipulate files and directories.
ii. Status information: Some programs ask the system for the date, time,
amount of available memory or disk space, number of users, or similar status
information.
In addition to system programs, most operating systems are supplied with application
programs that are useful in solving common problems or performing common
operations. Such application programs are word processors, text formatters, spreadsheets,
database systems, compilers, plotting and statistical-analysis packages and games.
Implementation
Once an operating system is designed, it must be implemented.
Traditionally, operating systems have been written in assembly language.
MS-DOS was initially written in Intel 8088 assembly language and was available
only on Intel CPUs. The Master Control Program (MCP) was written in ALGOL, and
MULTICS in PL/1. Linux is written in C and is available on Intel 80x86,
Motorola 680x0, SPARC, and MIPS RX000.
Now, they are most commonly written in higher-level languages such as C or C++.
The advantages of using higher-level languages are, the code can be written faster, it
is more compact, and is easier to understand and debug.
The improvements in compiler technology will improve the generated code for the
entire operating system by simple recompilation.
Finally, an operating system is easier to port (to move to some other hardware) if it is
written in a higher-level language.
The only possible disadvantages of implementing an operating system in a higher-
level language are reduced speed and increased storage requirements.
Simple Structures
Simple structure OS are small, simple & limited systems. The structure is not well
defined.
MS-DOS is an example of simple structure OS. MS-DOS layer structure is shown in
below figure
In MS-DOS, the interfaces and levels of functionality are not well separated.
For instance, application programs are able to access the basic I/O routines to write
directly to the display and disk drives.
UNIX is another example of a simple structure. Initially, it was limited by
hardware functionality.
o It consists of two separable parts: the kernel and the system programs.
o The kernel is further separated into series of interfaces & device drivers.
o We can view the traditional UNIX operating system as being layered, as shown
in figure
Everything below the system-call interface and above the physical hardware is the
kernel.
Kernel provides the file system, CPU scheduling, memory management, and other
operating-system functions through system calls.
This monolithic structure was difficult to implement and maintain.
Layered Approach
A system can be made modular in many ways. One method is the layered approach
in which the OS is divided into number of layers, where one layer is built on the top
of another layer.
The bottom layer (layer 0) is hardware and higher layer (layer N) is the user
interface. This layering structure is depicted in below figure 2.8.
A typical operating-system layer, say layer M, consists of data structures and a
set of routines that can be invoked by higher-level layers. Layer M, in turn, can
invoke operations on lower-level layers.
The main advantage of the layered approach is simplicity: each layer uses only the
services and functions provided by the layer below it. This simplifies debugging
and verification. Once the first layer is debugged, its correct functioning is
guaranteed while the second layer is debugged; if an error is found, it must be in
that layer, because the layers below are already debugged.
Each layer hides certain data structures, operations, and hardware from the
higher-level layers.
A problem with layered implementations is that they tend to be less efficient.
Micro Kernels
Modules
The best current methodology for operating-system design involves using object-
oriented programming techniques to create a modular kernel.
Here, the kernel has a set of core components and links in additional services either
during boot time or during run time. Such a strategy uses dynamically loadable
modules and is common in modern implementations of UNIX, such as Solaris, Linux,
and Mac OS X.
For example, the Solaris operating system structure, shown in the figure 2.9, is
organized around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
The top layers include application environments and a set of services providing a
graphical interface to applications.
Below these layers is the kernel environment, which consists primarily of the Mach
microkernel and the BSD kernel.
Mach provides memory management, support for remote procedure calls (RPCs) and
interprocess communication facilities, including message passing and thread
scheduling.
The BSD component provides a BSD command line interface, support for networking
and file systems, and an implementation of POSIX APIs, including Pthreads.
Virtual Machines
The fundamental idea behind a virtual machine is to abstract the hardware of a single
computer (the CPU, memory, disk drives, network interface cards, and so on) into
several different execution environments, thereby creating the illusion that each
separate execution environment is running its own private computer.
By using CPU scheduling and virtual-memory techniques, an operating system can
create the illusion that a process has its own processor with its own (virtual) memory.
Each process is provided with a (virtual) copy of the underlying computer as shown
in the below figure
A major difficulty with the virtual-machine approach involves disk systems.
Suppose that the physical machine has three disk drives but wants to support
seven virtual machines. Clearly, it cannot allocate a disk drive to each virtual
machine, because the virtual-machine software itself needs substantial disk space
to provide virtual memory and spooling. The solution is virtual disks, termed
minidisks in IBM's VM operating system, which are identical to physical disks in
all respects except size.
Implementation
Benefits
Examples
1. VMware
VMware architecture
The class loader loads the compiled .class files from both the Java program and the
Java API for execution by the Java interpreter.
After a class is loaded, the verifier checks that the .class file is valid Java bytecode
and does not overflow or underflow the stack. It also ensures that the bytecode does
not perform pointer arithmetic, which could provide illegal memory access.
If the class passes verification, it is run by the Java interpreter.
The JVM also automatically manages memory by performing garbage collection, the
practice of reclaiming memory from objects no longer in use and returning it to
the system.
The JVM may be implemented in software on top of a host operating system, such as
Windows, Linux, or Mac OS X, or as part of a Web browser.
System Boot
The procedure of starting a computer by loading the kernel is known as booting the
system.
Bootstrap program or Bootstrap loader locates the kernel, loads it into main memory
and start its execution.
The bootstrap program is stored in read-only memory (ROM), because RAM is in an
unknown state at system startup. All forms of ROM are known as firmware.
For large OSes like Windows and Mac OS, the bootstrap loader is stored in firmware
and the OS is on disk.
The bootstrap program has a small piece of code that reads a single block at a
fixed location from disk into memory and executes the code from that boot block.
A disk that has a boot partition is called a boot disk or system disk.
Process Concepts
The Process
A process is more than the program code, which is also called the text section.
It also includes the program counter, which represents the current activity, and
the contents of the processor's registers.
A process also includes a stack section, which contains temporary data, and a
data section, which contains global variables.
A process may also include a heap, which is memory that is dynamically
allocated during process run time.
The structure of a process in memory is shown in below figure
Process State
As a process executes it changes its state, and each process may be in one of the
following states:
o New: The process is being created.
o Running: Instructions are being executed.
o Waiting: The process is waiting for some event to occur.
o Ready: The process is waiting to be assigned to a processor.
o Terminated: The process has finished execution.
Only one process can be running on any processor at any instant. Many processes
may be ready and waiting.
The state diagram corresponding to these states is shown in the figure below.
Process Control Block (PCB)
Each process is represented in the operating system by a process control block
(PCB). The PCB contains important information about the specific process including,
o Process state: The current state of the process i.e., whether it is ready, running,
waiting, halted and so on.
o Program counter: Indicates the address of the next instruction to be executed
for a process.
o CPU registers: The registers vary in number and type. Along with the program
counter, this state information must be saved when an interrupt occurs, so that
the process can be continued correctly afterward, as shown in the figure below.
o CPU scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
Threads
Process Scheduling
The process scheduler selects an available process for execution on the CPU.
Scheduling queues
o Ready queue: The processes that are placed in main memory and are ready
and waiting to execute are placed in a list called the ready queue. This is in the
form of linked list, the header contains pointer to the first and final PCB in the
list. Each PCB contains a pointer field that points to next PCB in ready queue.
o Device queue: The list of processes waiting for a particular I/O device is
called a device queue. When the CPU is allocated to a process, it may execute for
some time and then quit, be interrupted, or wait for the occurrence of a particular
event such as the completion of an I/O request. Since the I/O device may be busy
with other processes, the process must wait for the I/O and is placed in the
device queue. Each device has its own queue.
The figure below shows the ready queue and various I/O device queues.
A new process is initially put in the ready queue and it waits there until it is
selected for execution or dispatched. Once the process is assigned CPU and is
executing, the following events can occur,
o It can issue an I/O request and be placed in an I/O queue.
o The process can create a sub process & wait for its termination.
o The process may be removed from the CPU as a result of an interrupt and
be put back into the ready queue.
Schedulers
Context Switch
When an interrupt occurs, the system needs to save the current context of the
process running on the CPU. The context is represented in the PCB of the
process.
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process. A state save of the current
state of the CPU, and then state restore to resume operations is performed.
Context-switch time is overhead and the system does no useful work while
switching. Context-switch times are highly dependent on hardware support.
Context-switch speed varies from machine to machine, depending on the memory
speed, the number of registers that must be copied, and the existence of special
instructions.
Process Operations
The processes in most systems can execute concurrently, and they may be created and
deleted dynamically. Thus, these systems must provide a mechanism for process creation
and termination.
Process Creation
A process may create several new processes by some create-process system call,
during the course of execution.
The creating process is called the parent process and the created one is called the
child process. Each of the new processes may in turn create other processes,
forming a tree of processes. Processes are identified by a unique process identifier
(or pid).
Figure 3.7 shows the process tree for the Solaris OS. The process at the top of the
tree is the sched process, with a pid of 0, which creates several children,
including pageout and fsflush. These processes are responsible for managing
memory and file systems. The sched process also creates the init process, which
serves as the root parent process for all user processes.
inetd and dtlogin are two children of init where inetd is responsible for
networking services such as telnet and ftp; dtlogin is the process representing a
user login screen.
When a user logs in, dtlogin creates an X-windows session (Xsession), which in
turn creates the sdt_shel process. Below sdt_shel, a user's command-line shell,
the C-shell or csh, is created. In this command-line interface, the user can then
invoke various child processes, such as the ls and cat commands.
There is also a csh process with a pid of 7778, representing a user who has logged
onto the system using telnet. This user has started the Netscape browser (pid
7785) and the emacs editor (pid 8105).
A process needs certain resources to accomplish its task. Along with the various
logical and physical resources that a process obtains when it is created,
initialization data may be passed along by the parent process to the child
process.
In the UNIX OS, the fork() system call creates a new process. In Windows,
CreateProcess() does the job.
The exec() system call is called after a fork() to replace the process's memory
space with a new program.
The C program shown below illustrates these system calls.
int main()
{
    pid_t pid;
    pid = fork();      /* fork another process */
    if (pid == 0)      /* child process */
        execlp("/bin/ls", "ls", NULL);
    else {             /* parent process */
        wait(NULL);    /* wait for the child to complete */
        exit(0);
    }
}
After the fork(), two different processes are running copies of the same program:
the value of pid is zero in the child and greater than zero in the parent. The
parent process waits for the child process to complete with the wait() system call.
When the child process completes, the parent process resumes from the call to
wait(), where it completes using the exit() system call. This is shown in the
figure below.
Process Termination
A process terminates when it finishes executing its last statement and asks the
operating system to delete it by using exit() system call.
Process resources are deallocated by the operating system. A process can
terminate another process via the TerminateProcess() system call. A parent may
terminate execution of its children (abort) for the following reasons:
o The child has exceeded its usage of allocated resources.
o The task assigned to the child is no longer required.
o The parent is exiting, and the operating system does not allow the child to
continue if its parent terminates.
Some systems do not allow a child to exist if its parent has terminated. If a
process terminates, then all its children must also be terminated; this phenomenon
is referred to as cascading termination.
The producer can produce one item while the consumer is consuming another
item. The producer and consumer must be synchronized, so that the consumer
does not try to consume an item that has not yet been produced by the producer.
#define BUFFER_SIZE 10
typedef struct {
……..
……..
}item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The shared buffer is implemented as a circular array with two logical pointers in
and out. The variable in points to the next free position in the buffer; out points
to the first full position in the buffer. The buffer is empty when in == out; the
buffer is full when ((in + 1) % BUFFER_SIZE) == out.
The code for the producer process is shown below.
item nextProduced;
while (true)
{
/* produce an item in nextProduced */
while (((in + 1) % BUFFER_SIZE) == out); // do nothing
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
}
The code for the consumer process is shown below.
item nextConsumed;
while (true)
{
/* wait while the buffer is empty */
while (in == out); // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in nextConsumed */
}
Naming
Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
Under Direct Communication processes must name each other explicitly
The send() and receive() primitives are defined as,
o send(P, message) – send a message to process P.
o receive(Q, message) – receive a message from process Q.
Properties of communication link in this scheme are,
o Links are established automatically between every pair of processes
that want to communicate.
o A link is associated with exactly two communicating processes.
o Between each pair there exists exactly one link.
This scheme exhibits two types of addressing,
o Symmetry: Both sender and receiver must name the other to
communicate.
o Asymmetry: Only the sender names the recipient, the recipient is not
required to name the sender. The send() and receive() primitives are
defined as,
send(P, message) – send a message to process P.
receive(id, message) – receive a message from any process; the
variable id is set to the name of the process with which
communication has taken place.
In Indirect Communication messages are sent and received from mailboxes
(also referred to as ports)
A mailbox can be viewed abstractly as an object into which messages can be
placed by processes and from which messages can be removed.
Each mailbox has a unique id and processes can communicate only if they
share a mailbox.
Synchronization
Buffering
1. Explain the fundamental difference between i) network OS and distributed OS ii) web-based and
embedded computing. (8) Dec 07/Jan 08
2. What do you mean by cooperating process? Describe its four advantages. (6) Dec 07/Jan 08
3. What are different categories of system programs? Explain. (6) Dec 07/Jan 08
4. Define OS. Discuss its role from different perspectives. (7) Dec 08/Jan 09.
5. List different services of OS. Explain. (6) Dec 08/Jan 09.
6. Explain the concept of virtual machines. Bring out its advantages. (5) Dec 08/Jan 09.
7. Difference between a trap and an interrupt (2) Dec 08/Jan 09.
8. Define an operating system. Discuss its role with user and system view points. (06)
Dec.09/Jan.10
9. Give features of symmetric and asymmetric multiprocessing systems (4) Dec.09/ Jan.10
10. Briefly explain common classes of services provided by various OS for helping users and for
ensuring efficient operation of the system. (10) Dec.09/ Jan.10
11. Define OS. Explain its two view points (5) Dec 2010
12. What are OS operations? Explain (6) Dec.09/ Jan.10
13. Define Virtual machine. With diagram, explain its working. What are its benefits? (9)
Dec.09/Jan.10
14. Distinguish among following terminologies: Multiprogramming systems, multitasking
systems, multiprocessor systems. (12) Dec.09/ Jan.10
15. What are system calls? With examples explain different categories of system call? (10)
( Dec 2012)
16. List and explain services provided by an OS that are designed to make using computer
system more convenient for the user. (8) (July 2013)
17. Is separation of mechanism and policy desirable while designing an OS? Discuss with an example.
(4) (July 2013)
18. With a neat diagram of VM ware architecture explain the concept of virtual machines and the
main advantage of using VM architecture. (8) (July 2013)
19. Differentiate between multiprogramming and multiprocessing (5) (Dec 2014)
20. Explain various functions of OS with respect to process and memory management. (5) (Dec
2014)
21. Explain any two facilities provided for implementing interacting process in programming
language and OS. (5) (Dec 2014)
22. Define an operating system? What is system’s viewpoint of an operating system? Explain
the dual mode operation of an operating system. (8) (Dec 2015)
23. Explain the types of multiprocessor systems and the types of clustering. What are fault
tolerant systems? (6) (Dec 2015)
24. Explain the concept of virtual machines. (6) (Dec 2015)
25. What is an Operating system? Explain its functions and goals.
26. Define the essential properties of the following types of operating systems:
a) Batch
b) Multiprogramming
c) Multitasking
d) Distributed
e) Real time
27. Give the examples of real time system application.
28. Explain the function of memory management.
29. Explain the various operating system services.
30. What are the different types of system calls.
31. Explain different types of system structures.
32. Explain file management and its activity.
33. What is microkernel? Discuss the layers of kernel.
34. Explain the different categories of system calls.
35. What are the three main purposes of an operating system?