OS Unit-1 Notes
Operating System:
An Operating System can be defined as an interface between the user and the hardware. It
provides an environment in which the user can perform tasks in a
convenient and efficient way.
The Operating System Tutorial is divided into various parts based on its functions
such as Process Management, Process Synchronization, Deadlocks and File
Management.
In a computer system (comprising hardware and software), the hardware can only
understand machine code (in the form of 0s and 1s), which doesn't make any sense to a
naive user.
We need a system which can act as an intermediary and manage all the processes and
resources present in the system.
1. Process Management
2. Process Synchronization
3. Memory Management
4. CPU Scheduling
5. File Management
6. Security
It is specialized software that controls and monitors the execution of all other
programs that reside in the computer, including application programs and other
system software.
Objectives of Operating System
To act as an intermediary between the hardware and its users, making it easier for
the users to access and use other resources.
To keep track of who is using which resource, granting resource requests, and
mediating conflicting requests from different programs and users.
To provide efficient and fair sharing of resources among users and programs.
Memory Management − Keeps track of the primary memory, i.e. what part of it is in use by whom,
what part is not in use, etc. and allocates the memory when a process or program requests it.
Processor Management − Allocates the processor (CPU) to a process and deallocates the processor
when it is no longer required.
Device Management − Keeps track of all the devices. This is also called I/O controller that decides
which process gets the device, when, and for how much time.
File Management − Keeps track of files and directories, allocates and de-allocates file resources, and decides who gets access to them.
Security − Prevents unauthorized access to programs and data by means of passwords and other similar
techniques.
Job Accounting − Keeps track of time and resources used by various jobs and/or users.
Control Over System Performance − Records delays between a request for a service and the
response from the system.
Interaction with the Operators − Interaction may take place via the console of the computer in the
form of instructions. The Operating System acknowledges the same, does the corresponding action, and
informs the operator via a display screen.
Error-detecting Aids − Production of dumps, traces, error messages, and other debugging and error-
detecting methods.
Coordination Between Other Software and Users − Coordination and assignment of compilers,
interpreters, assemblers, and other software to the various users of the computer systems.
We want a clear structure so that we can apply an operating system to our particular needs, because operating
systems have complex structures. It is easier to create an operating system in pieces, much as we break down
larger problems into smaller, more manageable subproblems, with every piece being a well-defined part of the
operating system. Operating system structure can be thought of as the strategy for connecting and incorporating
various operating system components within the kernel. Operating systems are implemented using many types of
structures, as discussed below:
SIMPLE STRUCTURE
It is the most straightforward operating system structure, but it lacks definition and is only appropriate for
use with small and restricted systems. Since the interfaces and levels of functionality in this structure are
not well separated, application programs are able to access basic I/O routines, which may result in
unauthorized access to I/O procedures.
o There are four layers that make up the MS-DOS operating system, and each
has its own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers,
application programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can
be defined independently and, when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update.
Because of this, simple structures can be used to build constrained systems
that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O
procedures are visible to end users, giving them the potential for unwanted
access.
o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it
offers superior performance.
o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another,
there is no abstraction or data hiding.
o The operating system's operations are accessible to layers, which can result in
data tampering and system failure.
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) contains
the hardware, and layer N (the highest layer) contains the user
interface. These layers are organized hierarchically, with the top-level layers making use of the
capabilities of the lower-level ones.
The functionalities of each layer are separated in this method, and abstraction is also an option.
Because layered structures are hierarchical, debugging is simpler: all lower-level layers
are debugged before the upper layer is examined. As a result, only the present layer has to be
reviewed, since all the lower layers have already been examined.
o Work duties are separated since each layer has its own functionality, and there
is some amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by
the top layers.
MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel
of any non-essential parts. These optional kernel components are implemented as system
programs and user-level applications. Systems developed this way are called
Micro-Kernel systems.
Each Micro-Kernel is created separately and is kept apart from the others. As a result,
the system is now more trustworthy and secure. If one Micro-Kernel malfunctions, the
remaining operating system is unaffected and continues to function normally.
In the 1970s, batch processing was very popular. In this technique, similar types of
jobs were batched together and executed one after another. People typically had access
to a single computer, which was called a mainframe.
In a batch operating system, access is given to more than one person; they submit their
respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then
executes the jobs one by one. The users collect their respective outputs when all the
jobs have been executed.
The purpose of this operating system was mainly to transfer control from one job to
another as soon as the job was completed. It contained a small set of programs called
the resident monitor that always resided in one part of the main memory. The
remaining part is used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates
CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of
J1 is very high, then the other four jobs will never be executed, or they will have to
wait for a very long time. Hence the other processes get starved.
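The starvation scenario above can be checked with a tiny first-come-first-served simulation (job names and burst times are invented for the illustration):

```python
# First-come-first-served batch: a long first job (J1) delays all later jobs.
jobs = [("J1", 100), ("J2", 2), ("J3", 2), ("J4", 2), ("J5", 2)]

clock = 0
waiting = {}
for name, burst in jobs:
    waiting[name] = clock      # how long this job sat in the queue before starting
    clock += burst

print(waiting)  # J2-J5 all wait 100+ units behind J1
```

With J1's burst of 100 units, even the 2-unit jobs J2-J5 wait over 100 units each, which is exactly the starvation effect described above.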
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job
requires the input of two numbers from the console, then it will never get it in the
batch processing scenario since the user is not present at the time of execution.
In a multiprogramming environment, when a process does its I/O, The CPU can start
the execution of other processes. Therefore, multiprogramming improves the
efficiency of the system.
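The efficiency gain can be sketched with a hand-worked two-job timeline (burst lengths are invented for the example):

```python
# Two jobs, each needing: 2 units CPU, then 3 units I/O, then 2 units CPU.
cpu, io = 2, 3

# Sequential (no multiprogramming): the CPU idles during each I/O wait.
sequential_total = 2 * (cpu + io + cpu)

# Multiprogrammed: job B's CPU bursts run while job A waits on I/O, and vice versa:
#   t=0..2  A on CPU          t=2..5  A in I/O, B on CPU (2..4)
#   t=4..7  B in I/O          t=5..7  A on CPU (overlaps B's I/O)
#   t=7..9  B on CPU
multiprogrammed_total = 9

print(sequential_total, multiprogrammed_total)
```

Overlapping one job's I/O with another job's CPU burst cuts the total time from 14 units to 9 in this sketch.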
Advantages of Multiprogramming OS
o Throughput increased, as the CPU always had a program to
execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Since the CPU and devices are kept busy almost all of the time completing
tasks, the CPU generates more heat.
Network Operating System
o In this type of operating system, the failure of any node in a system affects the
whole system.
o Security and performance are important issues. So trained network
administrators are required for network administration.
In Real-Time Systems, each job carries a certain deadline within which it is
supposed to be completed; otherwise there will be a huge loss, or, even if the result
is produced, it will be completely useless.
o It is easy to lay out, develop and execute real-time applications under a real-time
operating system.
o A real-time operating system achieves maximum utilization of devices and
system resources.
In a time-sharing operating system, computer resources are allocated in a time-
dependent fashion to several programs simultaneously. Thus it helps to provide a
large number of users direct access to the main computer. It is a logical extension of
multiprogramming. In time-sharing, the CPU is switched among multiple programs
given by different users on a scheduled basis.
A time-sharing operating system allows many users to be served simultaneously, so
sophisticated CPU scheduling schemes and Input/output management are required.
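The CPU switching described above can be sketched as a simple round-robin loop (process names, burst times, and the time quantum are invented for the example):

```python
from collections import deque

# Each entry: (process name, remaining CPU time). Time quantum = 2 units.
quantum = 2
ready = deque([("P1", 5), ("P2", 3), ("P3", 1)])
order = []

while ready:
    name, remaining = ready.popleft()   # CPU is switched to this process
    run = min(quantum, remaining)
    order.append(name)
    if remaining - run > 0:
        ready.append((name, remaining - run))  # unfinished: back of the ready queue

print(order)
```

Each process gets at most one quantum before the CPU moves on, so all three users see progress instead of waiting for P1 to finish.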
Distributed operating systems are more complex than network operating systems because they also have to take care of varying networking protocols.
Advantages of Distributed Operating System
Distributed System:
Parallel Systems:
Parallel Systems are designed to speed up the execution of programs by dividing the
programs into multiple fragments and processing these fragments at the same time.
Flynn has classified computer systems into four types based on parallelism in the
instruction and data streams:
1. Single Instruction stream, Single Data stream (SISD)
2. Single Instruction stream, Multiple Data stream (SIMD)
3. Multiple Instruction stream, Single Data stream (MISD)
4. Multiple Instruction stream, Multiple Data stream (MIMD)
S.No | Parallel System | Distributed System
3 | Tasks are performed with a more speedy process. | Tasks are performed with a less speedy process.
An operating system is a large and complex system that can only be created by
partitioning it into small pieces. Each piece should be a well-defined part of the system,
with carefully defined inputs, outputs, and functions.
The components of an operating system play a key role to make a variety of computer
system parts work together. There are the following components of an operating
system, such as:
1. Process Management
2. File Management
3. Network Management
4. Main Memory Management
5. Secondary Storage Management
6. I/O Device Management
7. Security Management
8. Command Interpreter System
Operating system components help you get the correct computing by detecting CPU
and memory hardware errors.
Process Management
The process management component is a procedure for managing the many processes
running simultaneously on the operating system. Every running software application
program has one or more processes associated with it.
For example, when you use a web browser like Chrome, there is a process running
for that browser program.
The execution of a process must be sequential, i.e., at any time at most one
instruction is executed on behalf of the process.
Here are the following functions of process management in the operating system, such
as:
File Management
The operating system has the following important activities in connection with file
management:
Network Management
The operating system manages communication between processes running on networked machines.
Main Memory Management
Main memory is a large array of bytes or words, each having its own address. The memory
management process is conducted by using a sequence of reads or writes to specific
memory addresses.
It should be mapped to absolute addresses and loaded inside the memory to execute a
program. The selection of a memory management method depends on several factors.
However, it is mainly based on the hardware design of the system. Each algorithm
requires corresponding hardware support. Main memory offers fast storage that can
be accessed directly by the CPU. It is costly and hence has a lower storage capacity.
However, for a program to be executed, it must be in the main memory.
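As an illustration of one such memory management method (the notes don't name a specific one), a first-fit allocator scans the free list and carves each request out of the first hole that is large enough; addresses and sizes here are invented:

```python
# Free list: (start_address, length) pairs. First-fit returns the start of
# the allocated region, or None if no hole is large enough.
def first_fit(free_blocks, size):
    for i, (start, length) in enumerate(free_blocks):
        if length >= size:
            free_blocks[i] = (start + size, length - size)  # shrink the hole
            return start
    return None

free = [(0, 100), (200, 50), (400, 300)]
a = first_fit(free, 120)   # skips the 100- and 50-byte holes, uses (400, 300)
b = first_fit(free, 80)    # now fits in the first hole, at address 0
print(a, b, free)
```

Other strategies (best-fit, worst-fit) differ only in which hole the scan selects, which is why the choice of method and its hardware support matter.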
Functions of Memory management
Secondary-Storage Management
Here are some major functions of secondary storage management in the operating
system:
o Storage allocation
o Free space management
o Disk scheduling
One of the important uses of an operating system is to hide the variations of
specific hardware devices from the user.
The I/O management system offers the following functions, such as:
The various processes in an operating system need to be secured from other activities.
Therefore, various mechanisms can ensure those processes that want to operate files,
memory CPU, and other hardware resources should have proper authorization from
the operating system.
For example, memory-addressing hardware helps to confirm that a process can
execute only within its own address space. Timers ensure that no process can keep control of
the CPU without eventually relinquishing it. Lastly, no process is allowed to do its own I/O,
which helps to protect the integrity of the various peripheral devices.
Security can improve reliability by detecting latent errors at the interfaces between
component subsystems. Early detection of interface errors can prevent the contamination of
a healthy subsystem by a malfunctioning subsystem. An unprotected resource cannot
defend against use (or misuse) by an unauthorized or incompetent user.
Its function is quite simple, get the next command statement, and execute it. The
command statements deal with process management, I/O handling, secondary storage
management, main memory management, file system access, protection, and
networking.
An Operating System provides services to both the users and to the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc. Each of these activities is
encapsulated as a process.
A process includes the complete execution context (code to execute, data to
manipulate, registers, OS resources in use). Following are the major activities of an
operating system with respect to program management −
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O
device.
Operating system provides the access to the required I/O device when required.
A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose. Examples of storage media
include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of
these media has its own properties like speed, capacity, data transfer rate and data
access methods.
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Following are the major
activities of an operating system with respect to file management −
Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages
communications between all the processes. Multiple processes communicate with one
another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices
or in the memory hardware. Following are the major activities of an operating system
with respect to error handling −
Resource Management
Protection
System calls:
A system call is a method for a computer program to request a service from the kernel
of the operating system on which it is running. A system call is a method of
interacting with the operating system via programs. A system call is a request from
computer software to an operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to
user programs. It acts as a link between the operating system and a process, allowing
user-level programs to request operating system services. The kernel system can only
be accessed using system calls. System calls are required for any programs that use
resources.
Applications run in an area of memory known as user space. A system call
connects to the operating system's kernel, which executes in kernel space. When an
application makes a system call, it must first obtain permission from the kernel. It
achieves this using an interrupt request, which pauses the current process and
transfers control to the kernel.
If the request is permitted, the kernel performs the requested action, like creating or
deleting a file. When the operation is finished, the kernel moves the results from
kernel space to user space in memory and returns them to the application, which then
resumes execution.
A simple system call may take a few nanoseconds to provide the result, like retrieving
the system date and time. A more complicated system call, such as connecting to a
network device, may take a few seconds. Most operating systems launch a distinct
kernel thread for each system call to avoid bottlenecks. Modern operating systems are
multi-threaded, which means they can handle various system calls at the same time.
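On a Unix-like system this wrapping is visible from Python, where functions such as os.getpid() are thin wrappers over kernel system calls (a sketch; assumes a POSIX-style system):

```python
import os
import time

# os.getpid() wraps the getpid() system call: trap into the kernel,
# read the current process id, return to user space.
pid = os.getpid()

# time.time() is ultimately backed by a clock-related system call,
# an example of the fast "retrieve the system date and time" case above.
now = time.time()

print("process id:", pid)
print("seconds since epoch:", int(now))
```

Both calls return in well under a microsecond of wall time, consistent with the "simple system call" case described above.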
There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Process Control
Process control is the system call that is used to direct processes. Some process
control examples include create process, load, execute, abort, end, and terminate
process.
File Management
File management is a system call that is used to handle files. Some file
management examples include create file, delete file, open, close, read, and write.
Device Management
Device management is a system call that is used to deal with devices. Some examples
of device management include request device, release device, read, write, and get
device attributes.
Information Maintenance
Information maintenance is a system call that is used to maintain information. Some
examples include getting or setting the time, date, system data, and process attributes.
Communication
Communication is a system call that is used for interprocess communication. Some
examples include creating and deleting communication connections and sending and
receiving messages.
There are various examples of Windows and Unix system calls. These are listed below:
open()
The open() system call allows you to access a file on a file system. It allocates
resources to the file and provides a handle that the process may refer to. A file can be
opened by many processes at once or by a single process only; it all depends on the file
system and its structure.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in
general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file to be read is identified by its file descriptor, which is obtained by opening
the file with open() before reading.
wait()
In some systems, a process may have to wait for another process to complete its
execution before proceeding. When a parent process creates a child process, the parent's
execution can be suspended until the child process finishes. The wait() system
call is used to suspend the parent process in this way; once the child process has
completed its execution, control is returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one
way for a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes generate clones of themselves using the fork() system call. It is one of the
most common ways to create processes in operating systems. When a parent process
spawns a child process, execution of the parent process is interrupted until the child
process completes. Once the child process has completed its execution, control is
returned to the parent process.
close()
It is used to end file system access. When this system call is invoked, it signifies that
the program no longer requires the file, and the buffers are flushed, the file
information is altered, and the file resources are de-allocated as a result.
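The open()/read()/write()/close() cycle above can be sketched with the low-level wrappers in Python's os module (the file name is invented; assumes a writable temporary directory):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

# open() returns a file descriptor that later calls refer to.
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello, kernel")       # write(fd, buffer, nbytes)
os.close(fd)                         # close() flushes buffers, releases resources

fd = os.open(path, os.O_RDONLY)      # re-open before reading
data = os.read(fd, 64)               # read(fd, buffer, nbytes)
os.close(fd)
print(data)
```

Note how read() and write() never mention the file name: after open(), the file descriptor is the only handle the process needs.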
exit()
The exit() is a system call that is used to end program execution. This call indicates
that the thread execution is complete, which is especially useful in multi-threaded
environments. The operating system reclaims resources spent by the process
following the use of the exit() system function.
Process Management
Process:
1. Stack − The process stack contains temporary data such as method/function parameters,
return address and local variables.
2. Heap − This is dynamically allocated memory to a process during its run time.
3. Text − This includes the current activity represented by the value of the Program Counter
and the contents of the processor's registers.
4. Data − This section contains the global and static variables.
When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.
1. Start − This is the initial state when a process is first started/created.
2. Ready − The process is waiting to be assigned to a processor. Ready processes are waiting
to have the processor allocated to them by the operating system so that they can run. A
process may come into this state after the Start state, or while running, if it is interrupted
by the scheduler to assign the CPU to some other process.
3. Running − Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
4. Waiting − The process moves into the waiting state if it needs to wait for a resource,
such as waiting for user input, or waiting for a file to become available.
5. Terminated or Exit − Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from
main memory.
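The five states and their legal transitions can be captured in a small table; the transition set used here is the common textbook one rather than anything stated verbatim in these notes:

```python
# Allowed transitions in the five-state process model.
TRANSITIONS = {
    "Start":      {"Ready"},
    "Ready":      {"Running"},
    "Running":    {"Ready", "Waiting", "Terminated"},
    "Waiting":    {"Ready"},
    "Terminated": set(),
}

def step(state, new_state):
    """Move to new_state, rejecting transitions the model doesn't allow."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical lifetime: created, scheduled, blocks on I/O, resumes, finishes.
s = "Start"
for nxt in ["Ready", "Running", "Waiting", "Ready", "Running", "Terminated"]:
    s = step(s, nxt)
print(s)
```

Note that a process never jumps from Start to Running or from Waiting to Running: it must always pass through Ready, because only the scheduler assigns the CPU.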
Operations on Processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination.
These are given in detail as follows −
Process Creation
Processes need to be created in the system for different operations. This can be done
by the following events −
Process Preemption
An interrupt mechanism is used in preemption: it suspends the process that is currently
executing, and the scheduler selects the next process to run. Preemption makes sure that
all processes get some CPU time.
Process Blocking
The process is blocked if it is waiting for some event to occur, such as I/O completion;
blocked processes don't require the processor. After the event is complete, the process
again goes to the ready state.
Process Termination
After the process has completed the execution of its last instruction, it is terminated.
The resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer
relevant. The child process sends its status information to the parent process before it
terminates. Also, when a parent process is terminated, its child processes are
terminated as well as the child processes cannot run if the parent processes are
terminated.
Process Scheduling
Definition
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates resources to a process for a fixed amount of
time. During resource allocation, the process switches from the running state to the ready
state or from the waiting state to the ready state. This switching occurs because the CPU
may give priority to other processes and replace the currently running process with a
higher-priority one.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the state of
a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues; the run queue can only have one entry per processor core on
the system.
The two-state process model refers to the running and not-running states, which are
described below −
1. Running − When a new process is created, it enters the system in the running state.
2. Not Running − Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process. The queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, it is transferred to the waiting queue; if the process has completed or
aborted, it is discarded. In either case, the dispatcher then selects a process from the
queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
S.No | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
2 | Speed is lesser than the short-term scheduler. | Speed is the fastest among the other two. | Speed is in between both the short- and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce the process into memory, and execution can be continued.
Thread:
A thread is a flow of execution through the process code, with its own program
counter that keeps track of which instruction to execute next, system registers which
hold its current working variables, and a stack which contains the execution history.
A thread shares some information with its peer threads, such as the code segment, data
segment and open files. When one thread alters a code segment memory item, all other
threads see that.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach
to improving operating system performance by reducing overhead; a thread is
equivalent to a classical process in what it can do, but with less overhead.
S.No | Process | Thread
1 | Process is heavy weight or resource intensive. | Thread is light weight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
Types of Thread
User Level Threads − In this case, the thread management kernel is not aware of the existence of threads.
The thread library contains code for creating and destroying threads, for passing
message and data between threads, for scheduling thread execution and for saving and
restoring thread contexts. The application starts with a single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
S.No | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
3 | User-level thread is generic and can run on any operating system. | Kernel-level thread is specific to the operating system.
Multithreading allows the execution of multiple parts of a program at the same time.
These parts are known as threads and are lightweight processes available within the
process. Therefore, multithreading leads to maximum utilization of the CPU by
multitasking.
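A minimal sketch of multithreading in Python: several threads increment a shared counter, illustrating the point above that threads within a process share the same data (the thread count and loop count are invented; the lock guards the shared update):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # threads share `counter`, so guard each update
            counter += 1

# Four lightweight threads run inside the same process.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for all threads to finish

print(counter)
```

Without the lock, the interleaved read-modify-write of `counter` could lose updates, which is why thread synchronization is needed when threads change one another's data.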
The main models for multithreading are one to one model, many to one model and
many to many model. Details about these are given as follows −
The one to one model maps each of the user threads to a kernel thread. This means
that many threads can run in parallel on multiprocessors and other threads can run
when one thread makes a blocking system call.
A disadvantage of the one to one model is that the creation of a user thread requires a
corresponding kernel thread. Since a large number of kernel threads burdens the system,
there is a restriction on the number of threads in the system.
The many to one model maps many user threads to a single kernel thread. This
model is quite efficient because thread management is handled in user space.
A disadvantage of the many to one model is that a thread blocking system call blocks
the entire process. Also, multiple threads cannot run in parallel as only one thread can
access the kernel at a time.
Many to Many Model
The many to many model maps many user threads to an equal or smaller number of
kernel threads. The number of kernel threads depends on the application or machine.
The many to many does not have the disadvantages of the one to one model or the
many to one model. There can be as many user threads as required and their
corresponding kernel threads can run in parallel on a multiprocessor.