OS Unit I
Operating Systems
Introduction:
A computer system has many resources (hardware and software) that may be
required to complete a task. The commonly required resources are input/output
devices, memory, file storage space, the CPU, etc. The operating system acts as a
manager of these resources and allocates them to specific programs and users
whenever necessary to perform a particular task. The operating system is therefore the
resource manager of the computer system, managing resources such as the processor,
memory, files, and I/O devices. In simple terms, an operating system is an interface
between the computer user and the machine.
Every computer must have an operating system in order to run other programs.
The operating system mainly coordinates the use of the hardware among the various
system programs and application programs for various users.
An operating system acts somewhat like a government: it performs no useful
function by itself, but it provides an environment within which other programs can do
useful work. Below we have an abstract view of the components of the computer
system:
Batch Operating System :
In a batch operating system, similar jobs are grouped together into batches and
executed one after another without direct user interaction.
Advantages:
1. The overall time taken by the system to execute all the programs is reduced.
2. The Batch Operating System can be shared between multiple users.
Disadvantages:
1. Manual interventions are required between two batches.
2. The CPU utilization is low because the time taken in loading and unloading of
batches is very high compared to the execution time.
Multi-programming :
Sharing the processor, when two or more programs reside in memory at the same time,
is referred to as multi-programming. Multi-programming assumes a single shared
processor. Multi-programming increases CPU utilization by organizing jobs so that
the CPU always has one to execute.
The following figure shows the memory layout for a multi-programming system.
An OS does the following activities related to multiprogramming.
The operating system keeps several jobs in memory at a time.
This set of jobs is a subset of the jobs kept in the job pool.
The operating system picks and begins to execute one of the jobs in the memory.
Multi-programming operating systems monitor the state of all active programs
and system resources using memory management programs to ensure that the
CPU is never idle unless there are no jobs to process.
Advantages
High and efficient CPU utilization.
User feels that many programs are allotted CPU almost simultaneously.
Disadvantages
CPU scheduling is required.
To accommodate many jobs in memory, memory management is required.
Time-Sharing Operating System :
In a time-sharing system, the CPU is switched among the ready processes so
frequently that each process receives an equal time quantum (time slice) of the CPU.
Advantages:
1. Since an equal time quantum is given to each process, every process gets an equal
opportunity to execute.
2. The CPU is kept busy in most cases, which is desirable.
Disadvantages:
1. A process having a higher priority will not get the chance to execute first, because
an equal opportunity is given to each process.
Personal Computers :
Personal computer operating system provides a good interface to a single user.
Personal computer operating systems are widely used for word processing,
spreadsheets and Internet access.
Personal computer operating systems are made for personal use. Your laptops,
desktop computers, tablets, etc. are your personal computers, and operating systems
such as Windows 7, Windows 10, Android, etc. are personal computer operating
systems. You can use a personal computer operating system for your personal
purposes, for example, chatting with friends on social media sites, reading articles
from the internet, making projects in Microsoft PowerPoint, designing a website,
programming, watching videos and movies, listening to songs, and much more.
Parallel Processing :
Parallel processing requires multiple processors, and all the processors work
simultaneously in the system. Here, a task is divided into subparts and these subparts
are then distributed among the available processors in the system. Parallel processing
completes the job in the shortest possible time.
All the processors in the parallel processing environment should run the same
operating system.
All processors here are tightly coupled and are packed in one casing. All the
processors in the system share common secondary storage, such as the hard disk, as
this is the first place where the programs are placed. The processors also share the
user terminal (from where the user interacts with the system). The user need not be
aware of the inner architecture of the machine; it should feel as if the system has a
single processor, and interaction with the system should be the same as with a single
processor.
Advantages
1. It saves time and money as many resources working together will reduce the time
and cut potential costs.
2. It can be impractical to solve large problems with serial computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial computing ‘wastes’ potential computing power, whereas parallel computing
makes better use of the hardware.
Disadvantages
1. It introduces challenges such as communication and synchronization between
multiple sub-tasks and processes, which are difficult to achieve.
2. The algorithms must be designed in such a way that they can be handled by a
parallel mechanism.
3. The algorithms or programs must have low coupling and high cohesion, but it is
difficult to create such programs.
4. Writing a parallelism-based program well requires more technically skilled and
expert programmers.
1) Simple structure
Such operating systems do not have a well-defined structure and are small,
simple and limited systems. The interfaces and levels of functionality are not well
separated. MS-DOS is an example of such an operating system. In MS-DOS,
application programs are able to access the basic I/O routines. In these types of
operating systems, the failure of one user program can cause the entire system to crash.
Diagram of the structure of MS-DOS is shown below.
2) Layered structure
In this structure, the OS is broken into a number of layers (levels), which makes
it possible to retain much more control over the system. The bottom layer (layer 0) is
the hardware and the topmost layer (layer N) is the user interface. The layers are
designed so that each layer uses the functions of the lower-level layers only. This
simplifies debugging: if the lower-level layers have already been debugged and an
error occurs, the error must be in the layer currently being debugged.
The main disadvantage of this structure is that at each layer the data needs to
be modified and passed on, which adds overhead to the system. Moreover, careful
planning of the layers is necessary, as a layer can use only lower-level layers. UNIX is
an example of this structure.
Advantages of Layered structure:
Layering makes it easier to enhance the operating system, as the implementation of a
layer can be changed easily without affecting the other layers.
It is very easy to perform debugging and system verification.
3) Micro-Kernel Structure
In this structure, the kernel is reduced to a small set of essential functions, such as
basic process management, memory management and inter-process communication.
All other services (device drivers, file systems, networking, etc.) are moved out of the
kernel and run as separate user-space processes that communicate with each other and
with the kernel by message passing. This makes the system easier to extend and more
reliable, since a failing service does not crash the whole kernel, but message passing
adds some overhead. Mach and MINIX are well-known micro-kernel based systems.
4) Monolithic Structure
A monolithic structure is a type of operating system architecture where the
entire operating system is implemented as a single large process in kernel mode.
Essential operating system services, such as process management, memory
management, file systems, and device drivers, are combined into a single code block.
5) Modular Structure
This is considered the best approach for an OS. It involves designing a modular
kernel: the kernel has only a set of core components, and other services are added to
the kernel as dynamically loadable modules, either at boot time or at run time. It
resembles the layered structure in that each module has defined and protected
interfaces, but it is more flexible than a layered structure because a module can call
any other module. For example, the Solaris OS is organized as shown in the figure.
System Components:
An operating system is an interface between the users and the hardware of a
computer system. It is system software, viewed as an organized collection of software
consisting of procedures and functions, that provides an environment for the
execution of programs. The operating system manages system software and computer
hardware resources and allows computing resources to be used in an efficient way.
Programs interact with the computer hardware with the help of the operating system.
A user can interact with the operating system by making system calls or using OS
commands.
Process Management
A process is a program in execution. It consists of the following:
a) Executable program
b) Program data
c) Stack and stack pointer
d) Program counter and other CPU registers
e) Details of opened files
A process can be suspended temporarily and the execution of another process
can be taken up. A suspended process can be restarted later. Before suspending a
process, its details are saved in a table called the process table so that it can be
executed later on. An operating system supports two system calls to manage processes,
Create and Kill:
Create – a system call used to create a new process.
Kill – a system call used to delete an existing process.
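As a rough illustration, on a UNIX-like system the Create and Kill operations
correspond to the fork() and kill() system calls. The following C sketch, which
assumes a POSIX environment, creates a child process and then deletes it:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();          /* Create: duplicate the calling process     */
    if (pid == 0) {
        while (1)                /* child: do nothing until it is killed      */
            pause();
    } else if (pid > 0) {
        sleep(1);                /* give the child time to start              */
        kill(pid, SIGKILL);      /* Kill: delete the existing (child) process */
        wait(NULL);              /* collect the child's exit status           */
        printf("Child %d terminated\n", (int)pid);
    }
    return 0;
}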
Files Management
Files are used for long-term storage. Files are used for both input and output.
Every operating system provides a file management service. This file management
service can also be treated as an abstraction as it hides the information about the disks
from the user. The operating system also provides system calls for file management.
The system calls for file management include:
File creation
File deletion
Read and Write operations
Files are stored in a directory. System calls are provided to put a file in a directory
or to remove a file from a directory. Files in the system are protected to maintain the
privacy of the user. The figure below shows the hierarchical file structure of directories.
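As a rough illustration, on a UNIX-like system these file-management services map
onto system calls such as open()/creat(), read(), write(), close() and unlink(). The
sketch below assumes a POSIX environment; the file name data.txt is only a
placeholder:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    int fd = open("data.txt", O_CREAT | O_RDWR, 0644);  /* file creation           */
    write(fd, "hello\n", 6);                            /* write operation         */
    lseek(fd, 0, SEEK_SET);                             /* rewind to the beginning */
    read(fd, buf, sizeof(buf));                         /* read operation          */
    close(fd);

    unlink("data.txt");                                 /* file deletion           */
    return 0;
}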
Command Interpreter
There are several ways for users to interface with the operating system. One of
the approaches to user interaction with the operating system is through commands.
The command interpreter provides a command-line interface: it allows the user to
enter a command at the command-line prompt (cmd), and it accepts and executes the
commands entered by the user. For example, the shell is the command interpreter
under UNIX. The commands to be executed can be implemented in two ways: either
the command interpreter itself contains the code to execute the command, or the
command is implemented as a separate system program that the interpreter loads and
runs.
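The core loop of a command interpreter can be sketched as follows on a UNIX-like
system: read a command, create a child process to run it as a system program, and
wait for it to finish. This is only a bare-bones illustration (no argument parsing, pipes
or built-in commands):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("cmd> ");                        /* command line prompt          */
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                              /* end of input: exit the shell */
        line[strcspn(line, "\n")] = '\0';       /* strip the trailing newline   */
        if (line[0] == '\0')
            continue;
        if (fork() == 0) {                      /* child executes the command   */
            execlp(line, line, (char *)NULL);
            perror("exec failed");              /* only reached if exec fails   */
            _exit(1);
        }
        wait(NULL);                             /* shell waits for the command  */
    }
    return 0;
}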
System Calls
System calls provide an interface to the services made by an operating system.
The user interacts with the operating system programs through System calls. These
calls are normally made available as library functions in high-level languages such as
C, Java, Python etc. This provides a level of abstraction, as the user is not aware of the
implementation or execution of the call being made; the details of the operating
system are hidden from the user. Different hardware and software services can be
availed through system calls.
Process Management
Memory Management
File Operations
Input / Output Operations
Network Management
The complexity of networks and services has created modern challenges for IT
professionals and users. Network management is a set of processes and procedures
that help organizations to optimize their computer networks. Mainly, it ensures that
users have the best possible experience while using network applications and services.
Security Management
The security mechanisms in an operating system ensure that authorized
programs have access to resources, and unauthorized programs have no access to
restricted resources. Security management refers to the various processes that ensure
a user has authorization from the operating system before accessing or changing files,
memory, the CPU, and other hardware resources.
I/O Device Management
The purpose of the I/O system is to hide the details of hardware devices from
the application programmer. The I/O device management component allows highly
efficient resource utilization while minimizing errors and making programming easy
for the entire range of devices available in the system.
Secondary Storage Management
Broadly, the secondary storage area is any space where data is stored
permanently and from which the user can retrieve it easily. Your computer’s hard
drive is the primary location for your files and programs. Other media, such as
CD-ROM/DVD drives, flash memory cards, and networked devices, also provide
secondary storage for data on the computer. The computer’s main memory (RAM) is
a volatile storage device in which all programs reside; it provides only temporary
storage space for performing tasks. Secondary storage refers to media other than
RAM (e.g. CDs, DVDs, or hard disks) that provide additional space for permanently
storing data and software programs; it is also called non-volatile storage.
Operating System Services :
1) Program Execution
An operating system must be able to load many kinds of programs into memory and
run them. A program must be able to end its execution, either normally or abnormally.
A process includes the complete execution of the written program or code. The
operating system performs the following activities:
o It loads the program into memory.
o It executes the program.
o It handles the program’s execution.
o It provides a mechanism for process synchronization.
o It provides a mechanism for process communication.
2) I/O Operations
The communication between the user and the device drivers is managed by the
operating system.
I/O devices are required by running processes; an I/O operation may involve a file or
an I/O device.
I/O operations are the read or write operations which are done with the help of
input-output devices.
The operating system gives access to the I/O devices when required.
4) Communication
In a computer system there may be a collection of processors which do not share
memory, peripheral devices or a clock; the operating system manages communication
between all the processes. Multiple processes can communicate with each other
through communication lines in the network. There are some major activities that are
carried out by an operating system with respect to communication:
Two processes may require data to be transferred between them.
Both processes can be on one computer or on different computers, but they are
connected through a computer network.
5) Error Handling
An error in one part of the system may cause malfunctioning of the complete
system. The operating system constantly monitors the system to detect errors and
avoid such situations. This relieves the user of the worry that an error in one part of
the system will disrupt all of its functioning.
An error can occur at any time and anywhere: in the CPU, in the I/O devices or
in the memory hardware. There are some activities that are performed by an operating
system:
The OS continuously checks for possible errors.
The OS takes appropriate action to ensure correct and consistent computing.
6) Resource Management
When there are multiple users or multiple jobs running at the same time,
resources must be allocated to each of them. There are some major activities that are
performed by an operating system:
The OS manages all kinds of resources using schedulers.
CPU scheduling algorithms are used for better utilization of the CPU.
7) Protection
The owners of information stored in a multi-user computer system want to
control its use. When several disjoint processes execute concurrently, it should not be
possible for any process to interfere with another process. Every process in the
computer system must be secured and controlled.
An operating system can be implemented with the help of various structures.
The structure of the OS depends mainly on how the various common components of
the operating system are interconnected and melded into the kernel.
System Calls :
System Calls provide an interface to the services made available by an
operating system.
Example of the system call requests made by a simple program that reads data from
one file and copies it to another file:
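A sketch of such a program on a UNIX-like system, using the open(), read(), write()
and close() system calls (the file names in.txt and out.txt are placeholders):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    int in  = open("in.txt", O_RDONLY);                    /* open the input file   */
    int out = open("out.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);

    while ((n = read(in, buf, sizeof(buf))) > 0)           /* read a block of data  */
        write(out, buf, n);                                /* and copy it to output */

    close(in);
    close(out);
    return 0;
}

Each of these calls traps into the kernel, which performs the actual work on behalf of
the program.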
Types of System Calls:
Category                   Windows                     Linux
Process Control            CreateProcess()             fork()
                           ExitProcess()               exit()
                           WaitForSingleObject()       wait()
File Management            CreateFile()                open()
                           ReadFile()                  read()
                           WriteFile()                 write()
                           CloseHandle()               close()
Device Management          SetConsoleMode()            ioctl()
                           ReadConsole()               read()
                           WriteConsole()              write()
Information Maintenance    GetCurrentProcessID()       getpid()
                           SetTimer()                  alarm()
                           Sleep()                     sleep()
Communication              CreatePipe()                pipe()
                           CreateFileMapping()         shmget()
                           MapViewOfFile()             mmap()
Process
Definition:
A program under execution is known as a process.
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process on
the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into the
executable memory at a time, and the loaded processes share the CPU using time
multiplexing.
Process States:
New: This state represents a newly created process that hasn’t started running yet. It
has not been loaded into the main memory, but its process control block (PCB)
has been created, which holds important information about the process.
Ready: A process in this state is ready to run as soon as the CPU becomes available.
It is waiting for the operating system to give it a chance to execute.
Running: This state means the process is currently being executed by the CPU. Since
we’re assuming there is only one CPU, at any time, only one process can be
in this state.
Waiting: This state means the process cannot continue executing right now. It is
waiting for some event to happen, like the completion of an input/output
operation (for example, reading data from a disk).
Terminated: A process in this state has finished its execution or has been stopped by
the user for some reason. At this point, it is released by the operating
system and removed from memory.
Process Control Block:
The process control block stores many data items that are needed for efficient
process management. Some of these data items are explained with the help of the
diagram given below −
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This is the unique identification number (process ID) assigned to the process.
Program Counter
This contains the address of the next instruction that needs to be executed in
the process.
Registers
This specifies the registers that are used by the process. They may
include accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process.
Memory Management Information
The memory management information includes the page tables or the segment
tables, depending on the memory system used. It also contains the values of the base
registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list
of open files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc.
are all a part of the PCB accounting information.
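In code, a PCB is usually realized as a kernel data structure. The C struct below is
only a simplified, hypothetical sketch of the fields described above; real kernels (for
example Linux's task_struct) contain far more detail:

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;                 /* process number (identifier)       */
    enum proc_state state;               /* new, ready, running, waiting, ... */
    unsigned long   program_counter;     /* address of the next instruction   */
    unsigned long   registers[16];       /* saved general-purpose registers   */
    unsigned long   base_reg, limit_reg; /* memory-management information     */
    int             open_files[16];      /* descriptors of the files in use   */
    unsigned long   cpu_time_used;       /* accounting information            */
};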
The Operating System maintains the following important process scheduling queues :
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queue − The processes which are blocked due to the unavailability of an I/O
device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready queue and the run queue, which can have only one entry per processor core on
the system; in the above diagram, the run queue has been merged with the CPU.
Schedulers:
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the system and
to decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Context Switch
A context switch is the mechanism to store and restore the state or context of a
CPU in Process Control block so that a process execution can be resumed from the
same point at a later time. Using this technique, a context switcher enables multiple
processes to share a single CPU. Context switching is an essential feature of a
multitasking operating system.
When the scheduler switches the CPU from executing one process to executing
another, the state of the currently running process is stored in its process control
block. After this, the state of the process to run next is loaded from its own PCB and
used to set the PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must
be saved and restored. To reduce the amount of context-switching time, some hardware
systems employ two or more sets of processor registers. When a process is switched
out, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
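Conceptually, the dispatcher performs something like the following sketch, assuming
the simplified struct pcb shown earlier and two hypothetical routines,
save_cpu_state() and load_cpu_state(), that copy the hardware registers to and from a
PCB:

void save_cpu_state(struct pcb *p);        /* hypothetical: registers -> PCB */
void load_cpu_state(const struct pcb *p);  /* hypothetical: PCB -> registers */

void context_switch(struct pcb *current, struct pcb *next)
{
    save_cpu_state(current);      /* store the PC, registers, etc. of the running process */
    current->state = READY;       /* or WAITING, depending on why it was switched out     */

    next->state = RUNNING;
    load_cpu_state(next);         /* restore the next process's saved context             */
    /* execution now continues where 'next' last left off */
}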
Cooperating Process:
Cooperating processes are those that can affect or be affected by other processes
running on the system. Cooperating processes may share data with each other.
3. Computation Speedup
Cooperating processes can be used to accomplish subtasks of a single task
simultaneously. This improves computation speed by allowing the task to be
completed faster, although it is only possible if the system contains several
processing elements.
4. Convenience
There are multiple tasks that a user requires to perform, such as printing,
compiling, editing, etc. It is more convenient if these activities may be managed
through cooperating processes.
Concurrent execution of cooperating processes needs systems that enable
processes to communicate and synchronize their actions.
1. Cooperation by sharing
The processes may cooperate by sharing data, including variables, memory, databases,
etc. The critical section is used to preserve data integrity: writes to the shared data are
made mutually exclusive to avoid inconsistent data.
2. Cooperation by Communication
The cooperating processes may cooperate by using messages. If every process
waits for a message from another process to execute a task, it may cause a deadlock.
If a process does not receive any messages, it may cause starvation.
The diagram here shows cooperation by communication: processes P1 and P2
cooperate by using messages to communicate.
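On a UNIX-like system, one simple message-passing mechanism between two
cooperating processes such as P1 and P2 is a pipe. A minimal sketch, assuming a
POSIX environment:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                   /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {                          /* P2: receives the message          */
        char msg[64];
        close(fd[1]);
        ssize_t n = read(fd[0], msg, sizeof(msg) - 1);
        msg[n > 0 ? n : 0] = '\0';
        printf("P2 received: %s\n", msg);
        _exit(0);
    }

    close(fd[0]);                               /* P1: sends the message             */
    write(fd[1], "hello from P1", strlen("hello from P1"));
    close(fd[1]);
    wait(NULL);
    return 0;
}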
Producer-Consumer Problem
There is a producer and a consumer; the producer creates an item and stores it
in a buffer while the consumer consumes it. For example, print software produces
characters that the printer driver consumes; a compiler can generate assembly code
which an assembler consumes; and the assembler may produce object modules that
are used by the loader. Both processes run simultaneously, and the consumer waits if
there is nothing to consume.
Producer Process
while (true)
{
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: the buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer Process
while (true)
{
    while (counter == 0)
        ;   /* do nothing: the buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
Where,
The producer uses the in variable to determine the next empty slot in the buffer.
The out variable is used by the consumer to determine where the item is located.
The counter is used by producers and consumers to determine the number of
filled slots in the buffer.
Shared Resources
There are two shared resources:
1.Buffer
2.Counter
Inconsistency occurs when the producer and consumer are not properly synchronized.
If both the producer and the consumer execute concurrently without any control, the
value of the counter used by both will be incorrect. These processes share the
following variables:
var n;
type item = ... ;
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;
The variables in and out are both initialized to 0. The shared buffer is implemented as
a circular array with two logical pointers, in and out. The in variable points to the
buffer's next free position, while the out variable points to the buffer's first full
position. The buffer is empty when in = out, and it is full when
(in + 1) mod n = out.
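The inconsistency arises because counter++ and counter-- are not atomic: each is
compiled into a load, modify, store sequence, and the two sequences can interleave.
One bad interleaving, with counter initially 5, looks like this:

producer:  register1 = counter          (register1 = 5)
producer:  register1 = register1 + 1    (register1 = 6)
consumer:  register2 = counter          (register2 = 5)
consumer:  register2 = register2 - 1    (register2 = 4)
producer:  counter = register1          (counter = 6)
consumer:  counter = register2          (counter = 4, but the correct value is 5)

One update is lost, which is why access to the shared counter must be synchronized.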
Operations on Processes:
1) Creating a Process
Creating a new process is a common task in software development. It involves
duplicating an existing process to create a new instance that can execute
independently.
The following steps outline the process creation operation:
Forking: The parent process creates a copy of itself, known as the child process.
Memory Allocation: The child process is allocated its own memory space.
Initialization: The child process is initialized with the necessary resources and
starts executing.
2)Terminating a Process
Terminating a process involves stopping the execution of a running process
and cleaning up its resources. It is important to handle process termination gracefully
to avoid undesired consequences. The following steps outline the process termination
operation:
Signal Handling – The parent process sends a termination signal, such as SIGTERM,
to the child process.
Cleanup – The child process performs any necessary cleanup operations, such as
releasing allocated memory or closing open files.
Exit – The child process exits and returns control to the parent process or the operating
system.
Threads:
A process can be divided into a number of lightweight processes; each lightweight
process is called a thread.
A thread has its own program counter (which keeps track of the next instruction to
execute), registers (which hold its current working variables) and stack (its execution
history).
Thread States:
1. Born: the thread has just been created.
2. Ready: the thread is waiting for the CPU.
3. Running: the system has assigned the processor to the thread.
4. Sleeping: a sleeping thread becomes ready after the designated sleep time expires.
5. Dead: the execution of the thread has finished.
Example: Word processor.
Typing, formatting, spell checking and saving can each run as a separate thread.
     Process                                    Thread
4.   It takes more time to switch between       It takes less time to switch between
     two processes.                             two threads.
5.   Communication between two processes        Communication between two threads
     is difficult.                              is easy.
6.   Processes cannot share the same            Threads can share the same
     memory area.                               memory area.
7.   System calls are required for processes    System calls are not required.
     to communicate with each other.
Multithreading:
A process is divided into a number of smaller tasks; each task is called a thread. The
number of threads within a process executing at the same time is called multithreading.
If a program is multithreaded, then even when some portion of it is blocked, the whole
program is not blocked; the rest of the program continues working if multiple CPUs
are available.
Multithreading gives the best performance when multiple CPUs are available; with
only a single thread, no performance benefit is achieved no matter how many CPUs
are available.
Process creation is heavy-weight while thread creation is light-weight. Threads can
simplify code and increase efficiency.
Kernels are generally multithreaded.
Threads within a process share the code, data and files of the process, while each
thread has its own registers and stack:
CODE – contains the instructions
DATA – holds the global variables
FILES – opened and closed files
REGISTERS – contain information about the CPU state
STACK – parameters, local variables, functions
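The sketch below, using POSIX threads, illustrates this sharing: both threads run the
same code and see the same global data, but each has its own stack and local variables.
Compile with -pthread; in a real program the update of shared_data would be
protected by a mutex.

#include <stdio.h>
#include <pthread.h>

int shared_data = 0;                       /* DATA: global, visible to all threads       */

void *worker(void *arg)                    /* CODE: the same function run by each thread */
{
    int id = *(int *)arg;                  /* STACK: each thread has its own local copy  */
    shared_data += id;                     /* shared; unsynchronized here for brevity    */
    printf("thread %d sees shared_data = %d\n", id, shared_data);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("final shared_data = %d\n", shared_data);
    return 0;
}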