Operating System Notes

What do you mean by Operating system?

No matter its size and application, every computer needs an operating system to make it functional
and useful. The operating system is an integral part of modern computer systems. It is a well-
organized collection of programs that manages the hardware.

An operating system provides an interface between the users and the computer hardware. A user is a
person sitting at the computer terminal who is concerned with the application rather than with the
architecture of the computer. The user never interacts with the hardware directly; to get the services
of the hardware, the user has to request them through the operating system.

The operating system is a primary resource manager. It manages the hardware, including processors,
memory, Input-Output devices, and communication devices.

The operating system operates either in kernel mode or user mode. Compilers and editors run in user
mode, whereas operating system code runs in kernel mode.
Types of Operating Systems
An operating system is a well-organized collection of programs that manages the computer
hardware. It is a type of system software that is responsible for the smooth functioning of the
computer system.

Batch Operating System


In the 1970s, batch processing was very popular. In this technique, similar types of jobs were batched
together and executed as a group. At that time, people typically had access to a single computer, which
was called a mainframe.

In a batch operating system, access is given to more than one person; users submit their respective jobs
to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then executes the
jobs one by one. Users collect their respective output after all the jobs have been executed.


The purpose of this operating system was mainly to transfer control from one job to another as soon
as a job was completed. It contained a small set of programs called the resident monitor that
always resided in one part of the main memory. The remaining part was used for servicing jobs.

Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time between
two jobs.

Disadvantages of Batch OS
1. Starvation

Batch processing suffers from starvation.

For Example:

There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high,
then the other four jobs will never be executed, or they will have to wait for a very long time. Hence
the other processes get starved.
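
To make the example concrete, here is a minimal C sketch; the burst times are assumed purely for illustration and are not part of the original example. It computes how long each job waits under first-come, first-served batching when J1 is very long:

#include <stdio.h>

/* Illustrative sketch only: the burst times for J1..J5 are assumed
 * to show how FCFS batching starves the jobs queued behind J1. */
int main(void) {
    const char *jobs[] = {"J1", "J2", "J3", "J4", "J5"};
    int burst[] = {1000, 5, 5, 5, 5};   /* J1 has a very high execution time */
    int n = 5, wait = 0;

    for (int i = 0; i < n; i++) {
        printf("%s waits %d time units before it starts\n", jobs[i], wait);
        wait += burst[i];               /* each job waits for all earlier jobs */
    }
    return 0;
}

Running this shows that J2 to J5 each wait at least 1000 time units before starting, which is the starvation described above.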

2. Not Interactive

Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires the
input of two numbers from the console, then it will never get it in the batch processing scenario since
the user is not present at the time of execution.

Multiprogramming Operating System


Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each
process needs two types of system time: CPU time and IO time.

In a multiprogramming environment, when a process performs its I/O, the CPU can start the execution of
other processes. Therefore, multiprogramming improves the efficiency of the system.
Advantages of Multiprogramming OS

o Throughput increases, as the CPU always has some program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS

o Multiprogramming systems provide an environment in which various system resources are
used efficiently, but they do not provide any user interaction with the computer system.

Multiprocessing Operating System


In multiprocessing, parallel computing is achieved. More than one processor is present in the
system, and they can execute more than one process at the same time, which increases the throughput
of the system.

Advantages of Multiprocessing operating system:

o Increased reliability: In a multiprocessing system, processing tasks can be distributed
among several processors. This increases reliability: if one processor fails, the task can be
given to another processor for completion.
o Increased throughput: With several processors, more work can be done in less time.

Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as it has to take care of
multiple CPUs simultaneously.

Multitasking Operating System


The multitasking operating system is a logical extension of a multiprogramming system that
enables multiple programs to run simultaneously. It allows a user to perform more than one computer
task at the same time.

Advantages of Multitasking operating system

o This operating system is more suited to supporting multiple users simultaneously.


o The multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system

o The processor is kept busier completing multiple tasks at the same time in a multitasking
environment, so the CPU generates more heat.

Network Operating System


An Operating system, which includes software and associated protocols to communicate with other
computers via a network conveniently and cost-effectively, is called Network Operating System.

Advantages of Network Operating System

o In this type of operating system, network traffic reduces due to the division between clients
and the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System

o In this type of operating system, the failure of any node in a system affects the whole system.
o Security and performance are important issues. So trained network administrators are required
for network administration.

Real Time Operating System


In real-time systems, each job carries a certain deadline within which the job is supposed to be
completed; otherwise there will be a huge loss, or even if the result is produced, it will be
completely useless.

Real-time systems are used, for example, in military applications: if a missile is to be launched,
it must be launched with a certain precision.

Advantages of Real-time operating system:

o It is easy to lay out, develop, and execute real-time applications under a real-time operating
system.
o A real-time operating system allows maximum utilization of devices and the system.

Disadvantages of Real-time operating system:


o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

Time-Sharing Operating System


In a time-sharing operating system, computer resources are allocated in a time-dependent fashion
to several programs simultaneously. Thus it helps to provide a large number of users direct access to
the main computer. It is a logical extension of multiprogramming. In time-sharing, the CPU is
switched among multiple programs given by different users on a scheduled basis.

A time-sharing operating system allows many users to be served simultaneously, so sophisticated
CPU scheduling schemes and input/output management are required.

Time-sharing operating systems are very difficult and expensive to build.

Advantages of Time Sharing Operating System

o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle and response time.

Disadvantages of Time Sharing Operating System

o Data transmission rates are very high in comparison to other methods.


o Security and integrity of user programs loaded in memory and data need to be maintained as
many users access the system at the same time.

Distributed Operating System


The Distributed Operating system is not installed on a single machine, it is divided into parts, and
these parts are loaded on different machines. A part of the distributed Operating system is installed
on each machine to make their communication possible. Distributed Operating systems are much
more complex, large, and sophisticated than Network operating systems because they also have to
take care of varying networking protocols.

Advantages of Distributed Operating System

o The distributed operating system provides sharing of resources.


o This type of system is fault-tolerant.

Disadvantages of Distributed Operating System

o Protocol overhead can dominate computation cost.

Following are the services provided by an operating system -

o Program execution
o Control Input/output devices
o Program creation
o Error Detection and Response
o Accounting
o Security and Protection
o File Management
o Communication

Program execution
To execute a program, several tasks need to be performed. Both the instructions and the data must be
loaded into main memory. In addition, input-output devices and files should be initialized, and
other resources must be prepared. The operating system handles these tasks, so the user does not
have to worry about memory allocation, multitasking, or similar details.
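
On a POSIX system, for instance, a shell or another program asks the operating system to run a program through system calls such as fork() and execlp(). The sketch below is only an illustration of this service, and the ls command is just an example:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* ask the OS to create a new process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: ask the OS to load and run another program (ls as an example) */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if the exec fails */
        _exit(1);
    }
    wait(NULL);                      /* parent waits until the child finishes */
    printf("child finished\n");
    return 0;
}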

Control Input/output devices

There are numerous types of I/O devices in a computer system, and each I/O device requires its own
specific set of instructions for operation. The operating system hides these details by presenting a
uniform interface, so that programmers can access such devices conveniently.

Program Creation

The operating system offers facilities and tools, including editors and debuggers, to help the
programmer create, modify, and debug programs.

Error Detection and Response

An error in one component may cause malfunctioning of the entire system. Errors include hardware and
software errors such as device failure, memory errors, division by zero, attempts to access forbidden
memory locations, etc. To handle them, the operating system constantly monitors the system to detect
errors and takes suitable action with the least impact on running applications.

While working with computers, errors may occur quite often. Errors may occur in the:

o Input/ Output devices: For example, connection failure in the network, lack of paper in the
printer, etc.
o User program: For example: attempt to access illegal memory locations, divide by zero, use
too much CPU time, etc.
o Memory hardware: For example, Memory error, the memory becomes full, etc.

To handle these errors and other types of possible errors, the operating system takes appropriate
action and generates messages to ensure correct and consistent computing.

Accounting

The operating system collects usage statistics for various resources and tracks performance
parameters and response times. These records are useful for further upgrades and for tuning the
system to improve its overall performance.

Security and Protection

The operating system provides protection to the data and programs of a user and guards against
interference from unauthorized users. The security features counter threats posed by individuals
outside the control of the operating system.

For Example:
When a user downloads something from the internet, that program may contain malicious code that
may harm the already existing programs. The operating system ensures that proper checks are
applied while downloading such programs.

If one computer system is shared among multiple users, then the various processes must be
protected from one another's intrusion. For this, the operating system provides mechanisms that
allow only those processes that have gained proper authorization from the operating system to use
resources. These mechanisms may include providing unique user IDs and passwords to each
user.

File management

Computers keep data and information on secondary storage devices like magnetic tape, magnetic
disk, optical disk, etc. Each storage medium has its own characteristics, such as speed, capacity,
data transfer rate, and data access method.

For file management, the operating system must know the types of different files and the
characteristics of different storage devices. It must also offer sharing and protection mechanisms
for files.

Communication

The operating system manages the exchange of data and programs among different computers
connected over a network. This communication is accomplished using message passing and shared
memory.
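
As a hedged illustration of the message-passing style of communication on a POSIX system, the following sketch uses a pipe to pass a short message from a child process to its parent; shared memory (for example via shm_open and mmap) is the other mechanism mentioned above.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }   /* kernel-managed channel */

    if (fork() == 0) {                  /* child process: send a message */
        close(fd[0]);
        const char *msg = "hello from the child process";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                       /* parent process: receive the message */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}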

There are four generations of operating systems. These can be described as follows −
The First Generation ( 1945 - 1955 ): Vacuum Tubes and Plugboards
Digital computers were not constructed until the Second World War. Calculating engines with
mechanical relays were built at that time. However, the mechanical relays were very slow and were
later replaced with vacuum tubes. These machines were enormous but were still very slow.
These early computers were designed, built, and maintained by a single group of people.
Programming languages were unknown and there were no operating systems, so all the programming
was done in machine language. All the problems were simple numerical calculations.
By the 1950s, punched cards had been introduced, and this improved the computer system. Instead of
using plugboards, programs were written on cards and read into the system.

The Second Generation ( 1955 - 1965 ): Transistors and Batch Systems


Transistors led to the development of the computer systems that could be manufactured and sold to
paying customers. These machines were known as mainframes and were locked in air-conditioned
computer rooms with staff to operate them.
The Batch System was introduced to reduce the wasted time in the computer. A tray full of jobs was
collected in the input room and read into the magnetic tape. After that, the tape was rewound and
mounted on a tape drive. Then the batch operating system was loaded, which read the first job from
the tape and ran it. The output was written on a second tape. After the whole batch was done, the
input and output tapes were removed and the output tape was printed.

The Third Generation ( 1965 - 1980 ): Integrated Circuits and Multiprogramming
Until the 1960s, there were two types of computer systems, i.e., scientific computers and commercial
computers. These were combined by IBM in the System/360, which used integrated circuits and
provided a major price and performance advantage over second-generation systems.
The third generation operating systems also introduced multiprogramming. This meant that the
processor was not idle while a job was completing its I/O operation. Another job was scheduled on
the processor so that its time would not be wasted.

The Fourth Generation ( 1980 - Present ): Personal Computers


Personal Computers were easy to create with the development of large-scale integrated circuits.
These were chips containing thousands of transistors on a square centimeter of silicon. Because of
these, microcomputers were much cheaper than minicomputers and that made it possible for a single
individual to own one of them.
The advent of personal computers also led to the growth of networks. This created network operating
systems and distributed operating systems. The users were aware of a network while using a
network operating system and could log in to remote machines and copy files from one machine to
another.

Process
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory, it becomes a process.
Program
A program is a piece of code which may be a single line or millions of lines. A computer program is
usually written by a computer programmer in a programming language. For example, here is a
simple program written in C programming language −
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}

A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a
dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries, and related data is referred to as software.

Process Life Cycle


When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1
Start
This is the initial state when a process is first started/created.

2
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have
the processor allocated to them by the operating system so that they can run. A process may
come into this state after the Start state, or while running, if it is interrupted by the scheduler
so that the CPU can be assigned to some other process.

3
Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

4
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.

5
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
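
As a small illustrative sketch (the names below are made up for this example, not taken from any real kernel), the five states above could be represented in C like this:

/* Illustrative only: one way of naming the five process states in C. */
enum proc_state {
    STATE_START,        /* initial state when the process is first created   */
    STATE_READY,        /* waiting in the ready queue for the CPU            */
    STATE_RUNNING,      /* instructions are being executed on a processor    */
    STATE_WAITING,      /* blocked on I/O, user input, or another resource   */
    STATE_TERMINATED    /* finished; waiting to be removed from main memory  */
};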

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to
keep track of a process as listed below in the table −

S.N. Information & Description

1
Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2
Process privileges
This is required to allow/disallow access to system resources.

3
Process ID
Unique identification for each of the process in the operating system.

4
Pointer
A pointer to parent process.

5
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this
process.

6
CPU registers
The various CPU registers in which the process context must be saved and restored when the process is in the running state.

7
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
8
Memory management information
This includes information such as the page table, memory limits, and segment table, depending on
the memory organization used by the operating system.

9
Accounting information
This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

10
IO status information
This includes a list of I/O devices allocated to the process.

The structure of a PCB is completely dependent on the operating system and may contain different
information in different operating systems.
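
As a hedged sketch of how the fields listed in the table above could be grouped together, here is one possible C structure; the field names and sizes are illustrative only and do not match any real kernel's PCB (Linux, for example, uses a structure called task_struct).

#include <stdint.h>

#define MAX_OPEN_FILES 16

/* Illustrative PCB layout; real operating systems define this differently.
 * The numbers in the comments refer to the rows of the table above. */
struct pcb {
    int         pid;                          /* 3: unique process ID                */
    int         state;                        /* 1: ready, running, waiting, ...     */
    int         privileges;                   /* 2: allowed access to resources      */
    struct pcb *parent;                       /* 4: pointer to the parent process    */
    uint64_t    program_counter;              /* 5: next instruction to execute      */
    uint64_t    registers[16];                /* 6: saved CPU registers              */
    int         priority;                     /* 7: CPU-scheduling information       */
    void       *page_table;                   /* 8: memory-management information    */
    uint64_t    cpu_time_used;                /* 9: accounting information           */
    int         open_files[MAX_OPEN_FILES];   /* 10: I/O status information          */
};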

What is context switching in the operating system?
Context switching is a technique or method used by the operating system to switch the CPU from one
process to another. When a switch is performed, the system stores the status of the old running
process in the form of registers and assigns the CPU to a new process to execute its tasks. While the
new process is running, the previous process must wait in the ready queue. The execution of the old
process later resumes from the point at which the other process stopped it. Context switching is what
characterizes a multitasking operating system, in which multiple processes share the same CPU to
perform multiple tasks without the need for additional processors in the system.

The need for Context switching


Context switching makes it possible to share a single CPU among all processes so that each can
complete its execution, by storing the status of each task. When a process is reloaded into the
system, its execution resumes from the same point at which it was switched out.

Following are the reasons that describe the need for context switching in the Operating system.

1. One process cannot switch to another directly in the system. Context switching helps the
operating system switch among the multiple processes that use the CPU's resources to accomplish
their tasks, by storing each process's context. The service of a process can then be resumed at
the same point later. If the currently running process's data or context were not stored, that
data would be lost while switching between processes.
2. If a high-priority process arrives in the ready queue, the currently running process is stopped
so that the high-priority process can complete its task in the system.
3. If a running process requires I/O resources, the current process is switched out so that another
process can use the CPU. When the I/O requirement is met, the old process goes into the ready
state to wait for its turn on the CPU. Context switching stores the state of the process so that
it can resume its task later; otherwise, the process would need to restart its execution from the
beginning.
4. If an interrupt occurs while a process is running, the process's status is saved as registers
using context switching. After the interrupt is resolved, the process switches from the waiting
state to the ready state and later resumes execution at the same point where the operating
system interrupted it.
5. Context switching allows a single CPU to handle multiple process requests simultaneously
without the need for any additional processors.

Example of Context Switching


Suppose that information about multiple processes is stored in their Process Control Blocks (PCBs).
One process is in the running state, executing its task on the CPU. While it is running, another
process arrives in the ready queue with a higher priority for completing its task on the CPU. Here,
context switching is used to replace the current process with the new process that requires the CPU
to finish its task. While switching the process, a context switch saves the status of the old process
in its registers. When that process is later reloaded onto the CPU, it resumes execution from the
point at which the new process stopped it. If we did not save the state of the process, we would have
to start its execution again from the beginning. In this way, context switching helps the operating
system switch between processes and store or reload a process when it needs to execute its tasks.

Context switching triggers


Following are the three types of context switching triggers as follows.

1. Interrupts
2. Multitasking
3. Kernel/User switch
Interrupts: If an interrupt occurs, for example while the CPU is waiting for data to be read from a
disk, a context switch automatically transfers control to the part of the system that handles the
interrupt, so that it can be serviced in as little time as possible.

Multitasking: Context switching is the characteristic of multitasking that allows a process to be
switched off the CPU so that another process can run. When switching the process, the old state is
saved so that the process's execution can resume later at the same point in the system.

Kernel/User Switch: This trigger occurs when the operating system switches between user mode and
kernel mode.

What is the PCB?


A PCB (Process Control Block) is a data structure used by the operating system to store all the
information related to a process. For example, when a process is created, updated, switched, or
terminated in the operating system, that information is recorded in its PCB.

Steps for Context Switching


There are several steps involved in context switching between processes. The following describes the
context switching of two processes, P1 and P2, when an interrupt occurs, I/O is needed, or a
higher-priority process arrives in the ready queue.

Initially, process P1 is running on the CPU to execute its task, and at the same time another
process, P2, is in the ready state. If an error or interrupt occurs, or the process requires
input/output, P1 switches from the running state to the waiting state. Before the state of process P1
is changed, context switching saves the context of process P1, in the form of registers and the
program counter, to PCB1. After that, it loads the state of process P2 from its PCB2 and puts P2 into
the running state.
The following steps are taken when switching process P1 to process P2:

1. First, context switching saves the state of process P1, in the form of the program counter and
the registers, to its PCB (Process Control Block), since P1 is the process currently in the
running state.
2. Next, PCB1 is updated and process P1 is moved to the appropriate queue, such as the ready
queue, the I/O queue, or the waiting queue.
3. After that, another process is chosen to enter the running state: a new process is selected from
the ready state, for example the process with the highest priority, to execute its task.
4. Now the PCB of the selected process P2 is updated. This includes switching its state from ready
to running, or from another state such as blocked, exit, or suspended.
5. If the CPU has executed process P2 before, the system retrieves the saved status of process P2
so that it can resume its execution at the same point where it was previously interrupted.

Similarly, process P2 can later be switched off the CPU so that process P1 can resume execution.
Process P1 is then reloaded from PCB1 into the running state and resumes its task at the same point.
Without this saved information, the context would be lost, and when the process is executed again, it
would have to start its execution from the beginning.
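
A simplified, illustrative C sketch of these steps follows. A real context switch is performed by privileged kernel code (usually partly in assembly), so the types and names here are assumptions made for this example only.

#include <stdio.h>

/* Illustrative sketch of the steps above, not real kernel code. */
enum state { READY, RUNNING, WAITING };

struct context { unsigned long pc; unsigned long regs[16]; };

struct task {
    int            pid;
    enum state     state;
    struct context ctx;          /* saved program counter and registers (the PCB) */
};

static struct context cpu;       /* stand-in for the real CPU's registers */

void context_switch(struct task *old, struct task *next)
{
    old->ctx = cpu;              /* step 1: save P1's context into PCB1          */
    old->state = READY;          /* step 2: update PCB1 and requeue P1           */

    next->state = RUNNING;       /* step 4: update PCB2 for the selected process */
    cpu = next->ctx;             /* step 5: load P2's saved context and resume   */
}

int main(void) {
    struct task p1 = { .pid = 1, .state = RUNNING };
    struct task p2 = { .pid = 2, .state = READY   };

    context_switch(&p1, &p2);    /* step 3 (choosing P2) is done by the caller   */
    printf("P1 is now %s, P2 is now %s\n",
           p1.state == READY ? "ready" : "not ready",
           p2.state == RUNNING ? "running" : "not running");
    return 0;
}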

Threads in Operating System


A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a
thread of execution or a thread of control. Every process has at least one thread of execution, and
there can be more than one thread inside a process. Each thread of the same process uses a separate
program counter, a stack of activation records, and control blocks. A thread is often referred to as
a lightweight process.
A process can be split into many threads. For example, in a browser, each tab can be viewed as a
thread. MS Word uses many threads: formatting text in one thread, processing input in another thread,
and so on.

Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new process.
o Threads can share common data, so they do not need to use inter-process communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.

Types of Threads
In the operating system, there are two types of threads.

1. Kernel-level thread.
2. User-level thread.

User-level thread
The operating system does not recognize user-level threads. User threads can be easily implemented,
and they are implemented entirely by the user. If a user-level thread performs a blocking operation,
the whole process is blocked. The kernel knows nothing about user-level threads and manages them as
if they were single-threaded processes. Examples: Java threads, POSIX threads, etc.
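
Since POSIX threads are cited as an example above, here is a minimal sketch of creating and joining two threads with the pthreads API (compile with -pthread); whether the library maps these to user-level or kernel-level threads depends on the implementation.

#include <pthread.h>
#include <stdio.h>

/* Minimal pthreads sketch: two threads of one process share the same
 * address space, but each gets its own stack and program counter. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);          /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}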


Advantages of User-level threads

1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel
level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple. The registers, PC, stack, and a small
thread control block are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread


Kernel-level threads are recognized by the operating system. With kernel-level threads, there is a
thread control block as well as a process control block in the system for each thread and each
process. Kernel-level threads are implemented by the operating system: the kernel knows about all the
threads and manages them, and it offers system calls to create and manage threads from user space.
The implementation of kernel threads is more difficult than that of user threads, and context-switch
time is longer. However, if one kernel thread performs a blocking operation, another thread of the
same process can continue execution. Examples: Windows, Solaris.
Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.

2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that block frequently.

Disadvantages of Kernel-level threads

1. The kernel manages and schedules all threads.

2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.

Components of Threads
Any thread has the following components.

1. Program counter
2. Register set
3. Stack space

Benefits of Threads
o Enhanced throughput of the system: When a process is split into many threads, and each
thread is treated as a job, the number of jobs completed per unit time increases, so the
throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.
o Responsiveness: When a process is split into several threads, the process can respond as soon
as one of its threads completes its execution.
o Communication: Communication between multiple threads is simple because the threads share the
same address space, whereas for processes we have to adopt special communication strategies for
communication between two processes.
o Resource sharing: Resources can be shared among all threads within a process, such as
code, data, and files. Note: the stack and registers cannot be shared between threads; there is a
separate stack and register set for each thread.
