BTech Notes and Material
Definition
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
Memory Management
Processor Management
Device Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
An Operating System does the following activities for file management −
Keeps track of information, location, uses, status etc. These collective facilities
are often known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Following are some of the important activities that an Operating System performs −
Security − By means of password and similar other techniques, it prevents
unauthorized access to programs and data.
Control over system performance − Recording delays between request for a
service and response from the system.
Job accounting − Keeping track of time and resources used by various jobs and
users.
Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error detecting aids.
Coordination between other software and users − Coordination and
assignment of compilers, interpreters, assemblers and other software to the
various users of the computer systems.
Process Management
The operating system is responsible for managing processes, i.e., assigning the
processor to one process at a time. This is known as process scheduling. The different
algorithms used for process scheduling are FCFS (first come first served), SJF (shortest job
first), priority scheduling, round robin scheduling etc.
There are many scheduling queues that are used to handle processes in process
management. When the processes enter the system, they are put into the job queue. The
processes that are ready to execute in the main memory are kept in the ready queue. The
processes that are waiting for the I/O device are kept in the device queue.
Memory Management
Memory management plays an important part in an operating system. It deals with
memory and the movement of processes from disk to primary memory for execution and back
again.
The activities performed by the operating system for memory management are −
The operating system assigns memory to processes as required. This can be
done using the best fit, first fit and worst fit algorithms.
All the memory is tracked by the operating system, i.e., it notes which parts of memory
are in use by processes and which are free.
The operating system deallocates memory from processes as required. This may
happen when a process has terminated or when it no longer needs the memory.
Device Management
There are many I/O devices handled by the operating system such as mouse,
keyboard, disk drive etc. There are different device drivers that can be connected to the
operating system to handle a specific device. The device controller is an interface between
the device and the device driver. User applications can access the I/O devices using
the device drivers, which are device-specific code.
File Management
Files are used to provide a uniform view of data storage by the operating system. All
the files are mapped onto physical devices that are usually non volatile so data is safe in the
case of system failure.
The files can be accessed by the system in two ways i.e. sequential access and direct access
−
Sequential Access
The information in a file is processed in order using sequential access. The file's
records are accessed one after another. Most programs, such as editors and
compilers, use sequential access.
Direct Access
In direct access or relative access, the file can be accessed at random for read and
write operations. The direct access model is based on the disk model of a file, since
disks allow random access.
Protection and Security
Protection and security require that computer resources such as the CPU, software,
memory etc. are protected. This extends to the operating system as well as the data in the
system. This can be done by ensuring integrity, confidentiality and availability in the
operating system. The system must be protected against unauthorized access, viruses, worms
etc.
Threats to Protection and Security
A threat is a program that is malicious in nature and leads to harmful effects for the
system. Some of the common threats that occur in a system are −
Virus
Viruses are generally small snippets of code embedded in a system. They are very
dangerous and can corrupt files, destroy data, crash systems etc. They can also spread
further by replicating themselves as required.
Trojan Horse
A trojan horse can secretly access the login details of a system. A malicious
user can then use these to enter the system as a seemingly harmless user and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the
knowledge of the users. It can be exploited to harm the data or files in a system by
malicious people.
Worm
A worm can destroy a system by using its resources to extreme levels. It can
generate multiple copies which claim all the resources and don't allow any other processes
to access them. A worm can shut down a whole network in this way.
Denial of Service
This type of attack does not allow legitimate users to access a system. It
floods the system with requests until it is overwhelmed and cannot work properly for
other users.
Protection and Security Methods
The different methods that may provide protection and security for different computer
systems are −
Authentication
This deals with identifying each user in the system and making sure they are who they
claim to be. The operating system makes sure that all the users are authenticated before
they access the system. The different ways to make sure that the users are authentic are:
Username/ Password
Each user has a distinct username and password combination and they need to
enter it correctly before they can access the system.
Random Numbers
The system can ask for numbers that correspond to letters that are pre-arranged.
This combination can be changed each time a login is required.
Secret Key
A hardware device can create a secret key related to the user id for login. This key
can change each time.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc. Each of these activities is
encapsulated as a process.
A process includes the complete execution context (code to execute, data to
manipulate, registers, OS resources in use). Following are the major activities of an
operating system with respect to program management −
I/O Operation
An I/O operation means a read or write operation with any file or any specific I/O device.
The operating system provides access to the required I/O device when required.
Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or
in the memory hardware. Following are the major activities of an operating system with
respect to error handling −
Resource Management
Protection
As can be seen from this diagram, the processes execute normally in the user mode until a
system call interrupts this. Then the system call is executed on a priority basis in the kernel
mode. After the execution of the system call, the control returns to the user mode and
execution of user processes can be resumed.
In general, system calls are required in the following situations −
If a file system requires the creation or deletion of files. Reading and writing from
files also require a system call.
Creation and management of new processes.
Network connections also require system calls. This includes sending and receiving
packets.
Access to hardware devices such as a printer, scanner etc. requires a system call.
Category                  Windows                  Unix
Process Control           CreateProcess()          fork()
                          ExitProcess()            exit()
                          WaitForSingleObject()    wait()
File Management           CreateFile()             open()
                          ReadFile()               read()
                          WriteFile()              write()
                          CloseHandle()            close()
Device Management         SetConsoleMode()         ioctl()
                          ReadConsole()            read()
                          WriteConsole()           write()
Information Maintenance   GetCurrentProcessID()    getpid()
                          SetTimer()               alarm()
                          Sleep()                  sleep()
Communication             CreatePipe()             pipe()
                          CreateFileMapping()      shmget()
                          MapViewOfFile()          mmap()
open()
The open() system call is used to provide access to a file in a file system. This system
call allocates resources to the file and provides a handle that the process uses to refer to
the file. A file can be opened by multiple processes at the same time or be restricted to one
process. It all depends on the file organisation and file system.
read()
The read() system call is used to access data from a file that is stored in the file
system. The file to read can be identified by its file descriptor and it should be opened using
open() before it can be read. In general, the read() system call takes three arguments, i.e.,
the file descriptor, the buffer which stores the read data and the number of bytes to be read
from the file.
write()
The write() system call writes data from a user buffer into a device such as a
file. This system call is one of the ways to output data from a program. In general, the write()
system call takes three arguments, i.e., the file descriptor, a pointer to the buffer where the
data is stored and the number of bytes to write from the buffer.
close()
The close() system call is used to terminate access to a file system. Using this system
call means that the file is no longer required by the program and so the buffers are flushed,
the file metadata is updated and the file resources are de-allocated.
System Programs
Process concepts
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system. To put it in simple terms, we write our computer programs in a
text file and when we execute this program, it becomes a process which performs all the
tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory −
S.N. Component & Description
1 Stack
The process stack contains the temporary data such as method/function
parameters, return addresses and local variables.
2 Heap
This is memory dynamically allocated to a process during its run time.
3 Text
This section contains the compiled program code; the current activity is
represented by the value of the Program Counter and the contents of the
processor's registers.
4 Data
This section contains the global and static variables.
Program
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task
when executed by a computer. When we compare a program with a process, we can
conclude that a process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries and related data is referred
to as software.
When a process executes, it passes through different states. These stages may differ
in different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.
1 Start
This is the initial state when a process is first started/created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the operating system so
that they can run. A process may come into this state after the Start state, or while
running if it is interrupted by the scheduler to assign the CPU to some other
process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from
main memory.
A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the
information needed to keep track of a process as listed below in the table −
1 Process State
The current state of the process, i.e., whether it is ready, running, waiting,
etc.
2 Process privileges
This is required to allow/disallow access to system resources.
3 Process ID
Unique identification for each of the process in the operating system.
4 Pointer
A pointer to parent process.
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be
executed for this process.
6 CPU registers
The various CPU registers in which the process state must be saved so that it
can resume execution in the running state.
7 Accounting information
This includes the amount of CPU used for process execution, time limits,
execution ID etc.
8 IO status information
This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may
contain different information in different operating systems. Here is a simplified diagram
of a PCB −
The PCB is maintained for a process throughout its lifetime, and is deleted once the
process terminates.
Process Scheduling
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process on the
basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.
The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the ready
and run queues which can only have one entry per processor core on the system; in the
above diagram, it has been merged with the CPU.
Two-state process model refers to running and non-running states which are described
below −
1 Running
When a new process is created, it enters the system in the running
state.
2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process. The queue is
implemented using a linked list. The dispatcher works as follows: when a
process is interrupted, it is transferred to the waiting queue. If the
process has completed or aborted, it is discarded. In either case, the
dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU
in the Process Control Block so that a process execution can be resumed from the same point
at a later time. Using this technique, a context switcher enables multiple processes to
share a single CPU. Context switching is an essential feature of a multitasking operating
system.
When the scheduler switches the CPU from executing one process to execute
another, the state from the current running process is stored into the process control
block. After this, the state for the process to run next is loaded from its own PCB and used
to set the PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive since register and memory state must
be saved and restored. To reduce context-switching time, some hardware
systems employ two or more sets of processor registers. When the process is switched,
the following information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
Operations on Processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows −
Process Creation
Processes need to be created in the system for different operations. This can be done by the
following events −
Process Blocking
The process is blocked if it is waiting for some event to occur. This event may be I/O, since
I/O operations are carried out by the devices and do not require the processor. After the
event is complete, the process again goes to the ready state.
Process Termination
After the process has completed the execution of its last instruction, it is terminated.
The resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer relevant. The
child process sends its status information to the parent process before it terminates. Also,
when a parent process is terminated, its child processes are terminated as well, since the
child processes cannot run if the parent process is terminated.
Cooperating Process
Cooperating processes are those that can affect or are affected by other processes
running on the system. Cooperating processes may share data with each other.
Reasons for needing cooperating processes
There may be many reasons for the requirement of cooperating processes. Some of these
are given as follows −
Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes. This leads to faster and
more efficient completion of the required tasks.
Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A mechanism is
required so that the processes can access the files in parallel to each other.
Convenience
There are many tasks that a user needs to do such as compiling, printing, editing etc.
It is convenient if these tasks can be managed by cooperating processes.
Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating processes.
This increases the computation speed as the task can be executed faster.
However, this is only possible if the system has multiple processing elements.
Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages.
Details about these are given as follows −
Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such as
memory, variables, files, databases etc. A critical section is used to provide data
integrity, and writing is made mutually exclusive to prevent inconsistent data.
In the above diagram, Process P1 and P2 can cooperate with each other using
shared data such as memory, variables, files, databases etc.
Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This may
lead to deadlock if each process is waiting for a message from the other to perform an
operation. Starvation is also possible if a process never receives a message.
In the above diagram, Process P1 and P2 can cooperate with each other using messages to
communicate.
Interprocess Communication
Interprocess communication is the mechanism provided by the operating system
that allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of
data from one process to another.
Semaphore
A semaphore is a variable that controls the access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting
semaphores.
Mutual Exclusion
Mutual exclusion requires that only one process or thread can enter the critical section
at a time. This is useful for synchronization and also prevents race conditions.
Barrier
A barrier does not allow individual processes to proceed until all the processes
reach it. Many parallel languages and collective routines impose barriers.
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.
Approaches to Interprocess Communication
The different approaches to implement interprocess communication are given as follows −
Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a
two-way data channel between two processes. This uses standard input and output
methods. Pipes are used in all POSIX systems as well as Windows operating systems.
Socket
The socket is the endpoint for sending or receiving data in a network. This is true for
data sent between processes on the same computer or data sent between different
computers on the same network. Most of the operating systems use sockets for
interprocess communication.
File
A file is a data record that may be stored on a disk or acquired on demand by a file
server. Multiple processes can access a file as required. All operating systems use
files for data storage.
Signal
Signals are useful in interprocess communication in a limited way. They are system
messages that are sent from one process to another. Normally, signals are not used
to transfer data but are used for remote commands between processes.
Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient
retrieves them. Message queues are quite useful for interprocess communication
and are used by most operating systems.
A diagram that demonstrates message queue and shared memory methods of interprocess
communication is as follows −