OS Unit-1 Notes
An operating system (OS) is a program that acts as an interface between the system hardware and
the user. Moreover, it handles all the interactions between the software and the hardware. All the
working of a computer system depends on the OS at the base level. Further, it performs all the
functions like handling memory, processes, the interaction between hardware and software, etc.
Now, let us look at the functions of an operating system.
Operating System
Objectives of OS
The primary goals of an operating system are as follows:
1. Convenience – An operating system improves the use of a machine. Operating
systems enable users to get started on the things they wish to complete quickly
without having to cope with the stress of first configuring the system.
2. Efficiency – An operating system enables the efficient use of resources. This is due to
less time spent configuring the system.
3. Ability to evolve – An operating system should be designed in such a way that it
allows for the effective development, testing, and introduction of new features
without interfering with service.
4. Management of system resources – It guarantees that resources are shared fairly
among various processes and users.
Operating system services:
User interface
Program execution
Job Accounting
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
1. User interface:
The operating system acts as an interface between the user and the system hardware. It makes
it easy and convenient for users to access system resources through different interface
mechanisms. The command-line interface (CLI) and graphical user interface (GUI) are the
common ways of interacting with the system.
2. Program execution:
Operating systems handle many kinds of activities from user programs to system
programs. The system must be able to load a program into memory and to run that
program. The program must be able to end its execution, either normally or abnormally.
Provides a mechanism for process synchronization, process communication and deadlock
handling.
3. Job Accounting – The operating system keeps track of all the functions of a computer
system, so it maintains a record of all the activities taking place on the system. It keeps an
account of all the information about memory, resources, errors, etc. This
information can then be used as and when required.
4. I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. An
operating system manages the communication between the user and the device drivers.
5. File system manipulation:
A file represents a collection of related information. Computers store files on
disk. Many operating systems provide a variety of file systems, and the operating system
supports various activities with respect to file management.
6. Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
7. Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in
the memory hardware. OS constantly checks for possible errors and takes an appropriate
action to ensure correct and consistent computing.
8. Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and file storage are to be allocated to each user or job. The OS manages all kinds
of resources using schedulers.
9. Protection
It provides mechanisms to control the access of programs, processes, or users to
the resources defined by a computer system. The OS ensures that all access to system
resources is controlled.
Types of Operating System
1. Batch Operating System:
In this system, the OS does not forward the jobs/tasks directly to the CPU. It works by grouping
together similar types of jobs under one category. Further, we name this group as a ‘batch’.
Hence, the name batch OS.
Advantages
• Multiple programs can be executed simultaneously.
• High and efficient CPU utilization.
• Increases the throughput of the system.
• It helps in reducing per-job setup time, since similar jobs are batched together.
Disadvantages
• There is no facility for user interaction with the system while a job runs.
• CPU scheduling is required.
• Memory management is also required to handle many jobs.
• Synchronization and IPC mechanisms are needed.
2. Time-Sharing Operating System:
In this system, the CPU is switched rapidly among multiple user tasks, giving each a small
time slice so that all users can interact with the system at once.
Advantages
• Each task gets an equal opportunity.
• Provides users an interface to interact with the system.
• CPU idle time can be reduced.
• Users get a quick response.
• Resources can be shared.
Disadvantages
• More complex to implement.
• One must take care of the security and integrity of user programs and data.
• Reliability problems.
• Data communication problems.
3. Distributed Operating System:
In this system, several independent, networked computers cooperate on tasks while
appearing to users as a single system.
Advantages
• Efficient resource sharing.
• Uses load balancing to share the workload.
• Reliability: the failure of one site does not halt the whole system.
• Exchange of information through communication links.
• Maintains coordination among programs using IPC mechanisms.
Disadvantages
• Complexity in implementation.
• Protection of shared resources is needed.
• Requires memory and resource management.
4. Network Operating System:
This system runs on a server and allows shared access to files, printers, and other
resources over a network.
Advantages
• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware up-gradation are easily integrated into the system.
• Server access is possible remotely from different locations and types of systems.
Disadvantages
• Servers are costly.
• User has to depend on a central location for most operations.
• Maintenance and updates are required regularly.
2. Memory Management:
The OS is responsible for the following activities in connection with memory management.
• Keeping track of which parts of memory are currently being used & by whom.
• Deciding which processes are to be loaded into memory when memory space becomes
available.
• Allocating & deallocating memory space as needed.
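The allocation and deallocation activities above can be illustrated with a toy first-fit allocator. This is only a sketch of the bookkeeping idea, not a real OS allocator; all names here (`MemoryManager`, `alloc`, `free`) are hypothetical.

```python
class MemoryManager:
    """Toy first-fit allocator: tracks which parts of memory are used & by whom."""

    def __init__(self, size):
        self.size = size
        self.holes = [(0, size)]     # free list of (start, length); one big hole at first
        self.allocated = {}          # pid -> (start, length)

    def alloc(self, pid, length):
        """First fit: scan the holes and carve the request out of the first that fits."""
        for i, (start, hole_len) in enumerate(self.holes):
            if hole_len >= length:
                self.allocated[pid] = (start, length)
                if hole_len == length:
                    self.holes.pop(i)                       # hole consumed entirely
                else:
                    self.holes[i] = (start + length, hole_len - length)
                return start
        return None                                         # no hole large enough

    def free(self, pid):
        """Deallocate: return the process's region to the free list."""
        start, length = self.allocated.pop(pid)
        self.holes.append((start, length))
        self.holes.sort()            # keep holes ordered; merging adjacent holes omitted

mm = MemoryManager(100)
a = mm.alloc("P1", 30)   # placed at address 0
b = mm.alloc("P2", 50)   # placed at address 30
mm.free("P1")            # hole (0, 30) returns to the free list
c = mm.alloc("P3", 20)   # first fit reuses the freed hole at address 0
```

A real kernel must also coalesce adjacent holes and handle fragmentation, which this sketch deliberately skips.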
3. File Management:
File management is one of the most important components of an OS. A computer can store
information on several different types of physical media; magnetic tape, magnetic disk & optical
disk are the most common media.
The OS is responsible for the following activities of file management.
• Creating & deleting files.
• Creating & deleting directories.
• Supporting primitives for manipulating files & directories.
• Mapping files into secondary storage.
• Backing up files on non-volatile media.
System Calls:
System calls provide the interface between a process & the OS. These are usually available in the
form of assembly-language instructions. Some systems allow system calls to be made directly
from a high-level language program such as C, BCPL or Perl. System calls occur in different
ways depending on the computer in use. System calls can be roughly grouped into 5 major
categories.
1. Process Control:
• End, abort: A running program needs to be able to halt its execution either normally
(end) or abnormally (abort).
• Load, execute: A process or job executing one program may want to load and executes
another program.
• Create process, terminate process: There is a system call for the purpose of
creating a new process or job (create process or submit job). We may also want to terminate a
job or process that we created (terminate process), if we find that it is incorrect or no
longer needed.
• Get process attributes, set process attributes: If we create a new job or process, we
should be able to control its execution. This control requires the ability to determine & reset
the attributes of a job or process (get process attributes, set process attributes).
2. File Manipulation:
• Create file, delete file: We first need to be able to create & delete files. Both the system
calls require the name of the file & some of its attributes.
• Open file, close file: Once the file is created, we need to open it & use it. We close the
file when we are no longer using it.
• Read, write, reposition file: After opening, we may also read, write or reposition the file
(rewind or skip to the end of the file).
• Get file attributes, set file attributes: For either files or directories, we need to be able
to determine the values of various attributes & reset them if necessary. Two system calls
get file attribute & set file attributes are required for their purpose.
3. Device Management:
• Request device, release device: If there are multiple users of the system, we first request
the device. After we have finished with the device, we must release it.
• Read, write, reposition: Once the device has been requested & allocated to us, we can
read, write & reposition the device.
4. Information maintenance:
• Get time or date, set time or date: Most systems have a system call to return the current
date & time or set the current date & time.
• Get system data, set system data: Other system calls may return information about the
system like number of current users, version number of OS, amount of free memory etc.
• Get process attributes, set process attributes: The OS keeps information about all its
processes & there are system calls to access this information.
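These information-maintenance calls are exposed in most languages' standard libraries; a small sketch querying the clock, the calling process's id, and basic system data:

```python
import os, time, platform

now = time.time()            # "get time or date": seconds since the epoch
pid = os.getpid()            # "get process attributes": our own process id
system = platform.system()   # "get system data": e.g. 'Linux' or 'Windows'
cpus = os.cpu_count()        # another piece of system data (may be None)
```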
Communication:
There are two modes of communication such as:
• Message passing model: Information is exchanged through an inter process communication
facility provided by operating system. Each computer in a network has a name by which it is
known. Similarly, each process has a process name which is translated to an equivalent identifier
by which the OS can refer to it. The get hostid and get processid system calls do this
translation. These identifiers are then passed to the general purpose open & close calls provided
by the file system or to specific open connection system call. The recipient process must give its
permission for communication to take place with an accept connection call. The source of the
communication known as client & receiver known as server exchange messages by read message
& write message system calls. The close connection call terminates the connection.
• Shared memory model: Processes use map memory system calls to access regions of memory
owned by other processes. They exchange information by reading & writing data in the shared
areas. The processes must ensure that they are not writing to the same location simultaneously.
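The message-passing sequence described above (open connection, accept connection, read message, write message, close connection) can be sketched with sockets, which is one common user-level realization of that model:

```python
import socket, threading

def server(listener):
    conn, _ = listener.accept()      # "accept connection"
    msg = conn.recv(1024)            # "read message"
    conn.sendall(b"ack: " + msg)     # "write message"
    conn.close()                     # "close connection"

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0 asks the OS for any free port
listener.listen(1)
t = threading.Thread(target=server, args=(listener,))
t.start()

# the client plays the role of the communication source
client = socket.create_connection(listener.getsockname())  # "open connection"
client.sendall(b"hello")             # the client writes its message
reply = client.recv(1024)            # and reads the server's reply
client.close()
t.join()
listener.close()
```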
Simple structure:
Simple-structured operating systems do not have a well-defined structure. Such systems
tend to be simple, small and limited.
Example: MS-DOS
o There are four layers that make up the MS-DOS operating system, and each has its own
set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can be defined
independently and, when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update. Because
of this, simple structures can be used to build constrained systems that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures
are visible to end users, giving them the potential for unwanted access.
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation,
including file management, memory management, device management, and operational
operations.
The core of an operating system is called the kernel. The kernel provides all other
system components with fundamental services, and it is the main interface between the
operating system and the hardware. In a monolithic design the kernel can directly
access all of the system's resources, including devices such as the keyboard or mouse.
The monolithic operating system is often referred to as the monolithic kernel. Multiple
programming techniques such as batch processing and time-sharing increase a processor's
usability. Working on top of the operating system and in complete command of all
hardware, the monolithic kernel performs the role of a virtual machine. This is an older style of
operating system, used for example in banks to carry out simple tasks like batch processing and
time-sharing, which allows numerous users at different terminals to access the operating system.
o Because layering is unnecessary and the kernel alone is responsible for managing
all operations, it is easy to design and execute.
o Due to the fact that functions like memory management, file management,
process scheduling, etc., are implemented in the same address area, the
monolithic kernel runs rather quickly when compared to other systems. Utilizing
the same address speeds up and reduces the time required for address allocation
for new processes.
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest
layer) contains the hardware, and the highest layer (layer N) contains the user interface.
These layers are organized hierarchically, with the top-level layers making use of
the capabilities of the lower-level ones.
The functionalities of each layer are separated in this method, and abstraction is also an
option. Because layered structures are hierarchical, debugging is simpler: all
lower-level layers are debugged before the upper layer is examined, so only the
present layer has to be reviewed, since all the lower layers have already been
verified.
o Work duties are separated since each layer has its own functionality, and there is
some amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by the
top layers.
MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of
any non-essential parts. These optional kernel components are implemented instead as
system and user-level programs. Systems developed this way are called
micro-kernels.
Each Micro-Kernel is created separately and is kept apart from the others. As a result, the system
is now more trustworthy and secure. If one Micro-Kernel malfunctions, the remaining operating
system is unaffected and continues to function normally.
Process Concept
A process is a program in execution. In memory, a process consists of the following sections:
1 Stack
This contains temporary data such as function parameters, return addresses,
and local variables.
2 Heap
This is memory dynamically allocated to a process during its run time.
3 Text
This includes the program code, together with the current activity represented by the
value of the Program Counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.
Process State:
Processes may be in one of 5 states.
o New - The process is in the stage of being created.
o Ready - The process has all the resources available that it needs to run, but the
CPU is not currently working on this process's instructions.
o Running - The CPU is currently executing this process's instructions.
o Waiting - The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur. For example the process
may be waiting for keyboard input, disk access request, inter-process messages, a
timer to go off, or a child process to finish.
o Terminated - The process has finished execution.
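The five-state model can be captured as a table of legal transitions, which a process may only move along; a minimal sketch (the names `VALID` and `can_move` are hypothetical):

```python
# Legal transitions in the five-state process model
VALID = {
    "new":        {"ready"},
    "ready":      {"running"},                        # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"}, # preempted, blocked, or done
    "waiting":    {"ready"},                          # I/O or event completed
    "terminated": set(),
}

def can_move(src, dst):
    """True if a process in state src may legally enter state dst."""
    return dst in VALID[src]

# e.g. a running process that requests I/O moves to waiting, and when the
# I/O completes it returns to ready - never directly back to running
```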
Context switch
• Context switching is a technique or method used by the operating system to switch the
CPU from one process to another.
• It is a method to store/restore the state of the CPU in the PCB, so that process execution
can be resumed from the same point at a later time. The context-switching method is
important for a multitasking OS.
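The save/restore idea can be sketched in a few lines. This is a toy model only: a dict stands in for the hardware registers a real kernel would save, and the `PCB` and `context_switch` names are hypothetical.

```python
class PCB:
    """Toy process control block: holds a saved copy of the CPU state."""
    def __init__(self, pid):
        self.pid = pid
        self.saved_context = None      # filled in when the process is switched out

cpu = {"pc": 0, "registers": [0, 0, 0]}   # pretend CPU state

def context_switch(cpu, old_pcb, new_pcb):
    old_pcb.saved_context = dict(cpu)         # save state of the outgoing process
    if new_pcb.saved_context is not None:
        cpu.update(new_pcb.saved_context)     # restore state of the incoming process

p1, p2 = PCB(1), PCB(2)
cpu["pc"] = 42                # P1 has executed up to instruction 42
context_switch(cpu, p1, p2)   # P1 switched out; its pc=42 is saved in its PCB
cpu["pc"] = 7                 # P2 runs for a while
context_switch(cpu, p2, p1)   # switching back restores P1's context: pc is 42 again
```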
Process scheduling:
• It allows OS to allocate a time interval of CPU execution for each process. Another
important reason for using a process scheduling system is that it keeps the CPU busy all
the time.
• This allows you to get the minimum response time for programs.
• A thread is a lightweight process which requires fewer resources. In a multithreaded
OS a process can have one or many threads.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.
Non-Preemptive Scheduling
In non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps it until it terminates or switches to the waiting state.
Advantages
1. It has a minimal scheduling burden or overhead.
2. It is very easy to implement and low in cost.
3. Less computational resources are used.
4. It has a high throughput rate.
Disadvantages
1. Shortest jobs are made to wait for longer jobs.
2. Less efficient approach.
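The trade-offs above are characteristic of non-preemptive first-come-first-served (FCFS) scheduling. A minimal sketch (the function name is hypothetical) of how a short job ends up waiting behind a long one:

```python
def fcfs_waiting_times(bursts):
    """bursts: CPU burst times in arrival order; returns each job's waiting time."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)     # a job waits for every job ahead of it to finish
        elapsed += burst          # then occupies the CPU for its whole burst
    return waits

# Two 3-unit jobs arriving behind a 24-unit job each wait for the long job
waits = fcfs_waiting_times([24, 3, 3])
avg = sum(waits) / len(waits)
```

This illustrates the "shortest jobs wait for longer jobs" disadvantage: the 3-unit jobs wait 24 and 27 units, giving an average wait of 17, versus 3 if they had run first.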
Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources (mainly CPU
cycles) are allocated to the process for a limited amount of time and then taken
away, and the process is again placed back in the ready queue if that process still has
CPU burst time remaining. That process stays in the ready queue till it gets its next
chance to execute.
Advantages
1. It is a more reliable method.
2. The average response time is improved.
3. The operating system makes sure that every process gets a fair share of
CPU time.
Disadvantages
1. Suspending the running process, changing the context, and dispatching the new
incoming process all take extra time.
2. The low-priority process would have to wait if multiple high-priority processes
arrived at the same time.
3. Implementation is difficult and the cost is high.
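Round robin is the classic preemptive policy described above: each process runs for at most one time quantum, is preempted, and rejoins the back of the ready queue. A small simulation sketch (the function name is hypothetical):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: CPU burst time}; returns the completion time of each pid."""
    ready = deque(bursts.items())          # the ready queue, in arrival order
    clock, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()   # dispatch the process at the head
        run = min(quantum, remaining)      # it runs for at most one quantum
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back of the queue
        else:
            completion[pid] = clock               # finished its CPU burst
    return completion

done = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
```

Note how the short job P3 finishes early (at time 5) instead of waiting behind P1's whole burst, which is why preemption improves average response time.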
Process Schedulers:
A scheduler is a type of system software that allows the OS to handle and monitor process
scheduling.
There are mainly three types of Process Schedulers:
Long Term Scheduler
Short Term Scheduler
Medium Term Scheduler
1. Long-term scheduler (Job scheduler)
• It selects which processes should be brought into the ready queue.
• Long-term scheduler is invoked infrequently (seconds, minutes) (may be slow)
• The long-term scheduler controls the degree of multiprogramming.
• Long-term scheduler should select a good process mix to improve performance
2. Short-term scheduler (or CPU scheduler) –
It selects which process should be executed next and allocates CPU.
Sometimes the only scheduler in a system.
Short-term scheduler is invoked frequently (milliseconds) (must be fast)
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
3. Medium-term scheduler
• It can be added if the degree of multiprogramming needs to decrease
• Remove process from memory, store on disk, bring back in from disk to continue
execution: swapping
| Long term scheduler | Medium term scheduler | Short term scheduler |
| Long term scheduler is a job scheduler. | Medium term scheduler is a process-swapping scheduler. | Short term scheduler is called a CPU scheduler. |
| The speed of the long term scheduler is less than that of the short term. | The speed of the medium term scheduler is in between the short and long term schedulers. | The speed of the short term scheduler is the fastest among the three. |
| The long term scheduler controls the degree of multiprogramming. | The medium term scheduler reduces the degree of multiprogramming. | The short term scheduler provides lesser control over the degree of multiprogramming. |
| The long term scheduler is almost nil or minimal in a time sharing system. | The medium term scheduler is a part of the time sharing system. | The short term scheduler is also minimal in a time sharing system. |
| The long term scheduler selects processes from the pool and loads them into memory for execution. | The medium term scheduler can reintroduce a process into memory so that its execution can be continued. | The short term scheduler selects those processes that are ready to execute. |
Message passing:
In this method, a process communicates with other processes by sending
messages. When two or more processes participate in inter-process communication, each process
sends messages to the others via the kernel using system calls. It is useful for exchanging smaller
amounts of data, is easier to implement, and involves little overhead. It allows
synchronization among processes, so there is no conflict over the data. However, it
is less efficient in terms of time, since every message requires system calls.
Shared Memory:
In this method a common memory region is shared among all the cooperating processes.
Processes can exchange information by reading and writing data to the shared region directly,
without the need for a system call on every access. It allows maximum speed and convenience of
communication. Since all processes share the memory, the operating system has to provide
security and synchronization for the shared information.
Multithreaded programming
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack. A traditional (or heavyweight) process has a single thread of control. If a process
has multiple threads of control, it can perform more than one task at a time. The difference
between a traditional single-threaded process and a multithreaded process is shown below.
1. Responsiveness:
Multithreading an interactive application may allow a program to continue running even if part
of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the
user.
2. Resource sharing. Processes can only share resources through techniques such as shared
memory and message passing. Threads share the memory and the resources of the process to
which they belong by default. The benefit of sharing code and data is that it allows an application
to have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is costly. Because threads
share the resources of the process to which they belong, it is more economical to create and
context-switch threads.
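The resource-sharing benefit can be seen directly in code: threads of one process share its data by default, so both workers below append to the same list with no IPC mechanism at all, needing only a lock for safe concurrent updates.

```python
import threading

results = []                       # shared by every thread of this process
lock = threading.Lock()

def worker(name, items):
    for i in items:
        with lock:                 # shared memory means updates must be synchronized
            results.append((name, i))

t1 = threading.Thread(target=worker, args=("A", range(100)))
t2 = threading.Thread(target=worker, args=("B", range(100)))
t1.start(); t2.start()
t1.join(); t2.join()
total = len(results)               # both threads wrote into the one shared list
```

Creating these threads is far cheaper than creating two processes plus a shared-memory segment, which is the economy point above.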
Multicore Programming
Computer systems with more than one core or CPU are called multi-core or multi-processor
systems. Multiple computing cores are designed on a single chip. Each core appears as a
separate processor to the operating system. Multithreaded programming provides a
mechanism for more efficient use of these multiple computing cores and improved
concurrency.
Multithreading Models:
In multi-threaded systems threads are identified as user threads and kernel threads. User
threads are supported above the kernel and are managed without kernel support, whereas
kernel threads are supported and managed directly by the operating system. There are
different types of multi-threaded models used such as: many-to-one model, one-to-one
model, and many-to-many model.
1) Many-to-One Model:
The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done by the thread library in user space, so it is efficient. Multiple threads
may not run in parallel on a multicore system because only one may be in the kernel at a time.
One thread blocking causes all to block. Few systems currently use this model.
Examples: Solaris Green Threads, GNU Portable Threads
2) One-to-One Model:
The one-to-one model maps each user-level thread to a separate kernel thread. Whenever
a user-level thread is created, the OS creates a kernel thread associated with it. It provides more
concurrency than the many-to-one model. The number of threads per process is sometimes
restricted due to the overhead.
Examples: Windows, Linux, Solaris 9 and later
3) Many-to-Many Model:
This model allows many user level threads to be mapped to many kernel threads. It also
allows the operating system to create a sufficient number of kernel threads to support user
threads. The number of kernel threads may be specific to either a particular application or
a particular machine.
Example: Solaris prior to version 9, Windows with the ThreadFiber package.