COM 311 - Operating Systems Week 1-5


1.0. Operating Systems

Introduction:

An operating system is a program that acts as an interface between the user and the computer hardware
and controls the execution of all kinds of programs.

An Operating System (OS) is an interface between a computer user and computer hardware. An operating
system is software that performs all the basic tasks such as file management, memory management,
process management, handling input and output, and controlling peripheral devices such as disk drives
and printers.

An operating system is software that enables applications to interact with a computer's hardware. The
software that contains the core components of the operating system is called the kernel.

The primary purposes of an Operating System are to enable applications (softwares) to interact with a
computer's hardware and to manage a system's hardware and software resources.

Some popular operating systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today, operating
systems are found in almost every device: mobile phones, personal computers, mainframe computers,
automobiles, TVs, toys, etc.

An operating system is system software that acts as an interface between the user and the hardware.
Software is a set of tested programs with documentation (a user manual). Software is of two types:

1. Application Software – software designed for a specific task, e.g. Windows Media Player.
2. System Software – software that operates the hardware and provides the platform for application
software to run. An operating system is system software.

An operating system is concerned with the allocation of resources and services, such as memory,
processors, devices, and information. The operating system correspondingly includes programs to
manage these resources, such as a traffic controller, a scheduler, a memory management module, I/O
programs, and a file system.

Types of interface:

GUI – Graphical User Interface (access is through images or icons)

CUI – Character User Interface (access is through typed commands)

Types of Operating Systems:

1. Batch OS
2. Multi-programming OS
3. Multi-tasking OS
4. Multi-processing OS
5. Real time OS
6. Network OS
7. Distributed OS
8. Mobile OS
9. Single User OS
An operating system acts as an intermediary between the user of a computer and computer hardware.
The purpose of an operating system is to provide an environment in which a user can execute programs
conveniently and efficiently.

An operating system is a software that manages computer hardware. The hardware must provide
appropriate mechanisms to ensure the correct operation of the computer system and to prevent user
programs from interfering with the proper operation of the system.

Features of an Operating System – An operating system has the following features:

1. Convenience: An OS makes a computer more convenient to use.


2. Efficiency: An OS allows the computer system resources to be used efficiently.
3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without interfering with
existing services.
4. Throughput: An OS should be constructed so that it can give maximum throughput (number of
tasks completed per unit time).

Major Functionalities of Operating System:

1. Resource Management: When multiple users (or processes) access the system concurrently, the
OS acts as a resource manager: its responsibility is to allocate hardware to the users and to
balance the load on the system.
If two or more users (or processes) need to access the computer resources (CPU, memory, I/O
devices), the OS determines which resource is allocated to whom and for how long.

2. Process Management: It includes various tasks like scheduling and termination of the process. It
is done with the help of CPU Scheduling algorithms.
3. Storage Management: The file system mechanism is used for the management of storage.
NTFS, FAT, CIFS, NFS, etc. are some file systems. All data is stored on the various tracks of hard
disks, all of which are managed by the storage manager.
4. Memory Management: Refers to the management of primary memory. The operating system
has to keep track of how much memory has been used and by whom. It has to decide which
process needs memory space and how much. OS also has to allocate and deallocate the memory
space.
To execute, a program must be allocated space in memory. Memory management is particularly
concerned with RAM (main memory). Because the size of main memory is limited, not all processes
can be kept in main memory at the same time; the OS determines which process will be allocated
main memory, for how long, and which processes will be moved to secondary memory.

5. Security/Privacy Management: The operating system also provides privacy by means of
passwords, so that unauthorized applications cannot access programs or data. For example,
Windows uses Kerberos authentication to prevent unauthorized access to data.

The operating system as a user interface (layered view):

User → System and application programs → Operating system → Hardware

Evolution of Operating System

Operating systems have been evolving over the years. We can categorise this evolution into different
generations, briefly described below:

0th Generation

The term 0th generation refers to the period of development of computing when Charles Babbage
invented the Analytical Engine and, much later, John Atanasoff created an electronic computer around
1940. The hardware component technology of this period was the electronic vacuum tube. No operating
system was available for the computers of this generation, and programs were written in machine
language. Computers in this generation were inefficient and dependent on the varying competencies of
the individual programmers, who also acted as operators.

First Generation (1951-1956)


The first generation marked the beginning of commercial computing including the introduction of Eckert
and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701.

For a time, system operation was performed with the help of expert operators and without the benefit of
an operating system, even though programs began to be written in higher-level, procedure-oriented
languages, which expanded the operator's routine. Later, mono-programmed operating systems were
developed; these eliminated some of the human intervention in running jobs and provided programmers
with a number of desirable functions. These systems still operated under the control of a human operator,
who followed a number of steps to execute a program. Programming languages such as FORTRAN,
developed by John W. Backus around 1956, appeared in this period.

Second Generation (1956-1964)

The second generation of computer hardware was most notably characterised by transistors replacing
vacuum tubes as the hardware component technology. The first operating system, GMOS (developed by
General Motors for IBM machines), appeared in this generation. GMOS was a single-stream batch
processing system: it collected similar jobs into groups or batches and then submitted them to the
operating system on punched cards, completing the jobs on the machine one after another. After finishing
one job, the operating system cleaned up and then read and initiated the next job from the punched cards.

Researchers also began to experiment with multiprogramming and multiprocessing in their computing
services, which led to time-sharing systems. A noteworthy example is the Compatible Time-Sharing
System (CTSS), developed at MIT during the early 1960s.

Third Generation (1964-1979)

The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of
computers. Hardware technology began to use integrated circuits (ICs) which yielded significant
advantages in both speed and economy. Operating system development continued with the introduction
and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer’s
data channel I/O capabilities continued to develop.

Another advance that led to the personal computers of the fourth generation was the development of
minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time, indeed, for
the development of both computer hardware and the accompanying operating systems.

Fourth Generation (1979 – Present)

The fourth generation is characterised by the appearance of the personal computer and the workstation.
The component technology of the third generation was replaced by very large scale integration (VLSI).
Many of the operating systems we use today, such as Windows, Linux and macOS, were developed in the
fourth generation.

The following are some of the important functions of an operating system:

1. Memory Management
2. Processor Management
3. Device Management
4. File Management
5. Network Management
6. Security
7. Control over system performance
8. Job accounting
9. Error detecting aids
10. Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a
large array of words or bytes where each word or byte has its own address.

Main memory provides a fast storage that can be accessed directly by the CPU. For a program to be
executed, it must be in the main memory. An Operating System does the following activities for memory
management −

1. Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
2. In multiprogramming, the OS decides which process will get memory when and how much.
3. Allocates the memory when a process requests it to do so.
4. De-allocates the memory when a process no longer needs it or has been terminated.
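As an illustration of "keeping track of which parts of memory are in use and by whom", the following toy sketch (in C, with invented names; real memory managers use paging, segmentation and virtual memory) models main memory as a fixed number of frames and records the owning process ID for each frame:

/* frame_table.c - toy frame table: which memory frames are in use, and by whom. */
#include <stdio.h>

#define FRAMES 16               /* pretend main memory has 16 fixed-size frames */

static int owner[FRAMES];       /* 0 = free, otherwise the PID using the frame */

/* Allocate 'count' free frames to process 'pid'; return how many were granted. */
static int allocate(int pid, int count)
{
    int granted = 0;
    for (int f = 0; f < FRAMES && granted < count; f++)
        if (owner[f] == 0) { owner[f] = pid; granted++; }
    return granted;
}

/* De-allocate every frame owned by 'pid' (e.g. the process terminated). */
static void release(int pid)
{
    for (int f = 0; f < FRAMES; f++)
        if (owner[f] == pid) owner[f] = 0;
}

int main(void)
{
    printf("P1 got %d frames\n", allocate(1, 4));
    printf("P2 got %d frames\n", allocate(2, 6));
    release(1);                                   /* P1 terminates */
    printf("P3 got %d frames\n", allocate(3, 10)); /* reuses P1's frames */
    for (int f = 0; f < FRAMES; f++) printf("%d ", owner[f]);
    printf("\n");
    return 0;
}

Running it shows P3 receiving the frames freed when P1 terminates, which is exactly the allocate/de-allocate bookkeeping described in the list above.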

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how
much time. This function is called process scheduling. An Operating System does the following activities
for processor management:

1. Keeps track of the processor and the status of processes. The program responsible for this task is
known as the traffic controller.
2. Allocates the processor (CPU) to a process.
3. De-allocates processor when a process is no longer required.

Device Management

An Operating System manages device communication via their respective drivers. It does the following
activities for device management −

1. Keeps track of all devices. The program responsible for this task is known as the I/O controller.
2. Decides which process gets the device when and for how much time.
3. Allocates the devices in an efficient way.
4. De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories.

An Operating System does the following activities for file management −

1. Keeps track of information, location, usage, status, etc. These collective facilities are often known
as the file system.
2. Decides who gets the resources.
3. Allocates the resources.
4. De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −

Security − By means of password and similar other techniques, it prevents unauthorized access to
programs and data.

Control over system performance − Recording delays between request for a service and response from
the system.

Job accounting − Keeping track of time and resources used by various jobs and users.

Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.

Coordination between other software and users − Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer system.

2.0. OS Processes – Structure, Functions and Philosophy of Operating Systems

A process is a program, or a fraction of a program, that is loaded into main memory. A process needs
certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process
management component manages the multiple processes running simultaneously on the operating
system.

A program in running state is called a process.

The operating system is responsible for the following activities in connection with process management:

- Create, load, execute, suspend, resume, and terminate processes.
- Switch the system among multiple processes in main memory.
- Provide communication mechanisms so that processes can communicate with each other.
- Provide synchronization mechanisms to control concurrent access to shared data, keeping shared
data consistent.
- Allocate/de-allocate resources properly to prevent or avoid deadlock situations.

A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the
system.

To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
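As a concrete sketch (assuming a POSIX system such as Linux; the use of ls is just an example command), the following C program shows a program becoming a process: the parent creates a child process with fork(), the child replaces its image with another program using execlp(), and the parent waits for the child to terminate.

/* process_demo.c - minimal sketch of process creation on a POSIX system.
 * Compile: gcc process_demo.c -o process_demo */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();             /* create a new process (the child) */

    if (pid < 0) {
        perror("fork failed");
        return EXIT_FAILURE;
    }

    if (pid == 0) {
        /* Child: replace this process image with the "ls" program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp failed");    /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    }

    /* Parent: wait until the child terminates, then report its status. */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
    return EXIT_SUCCESS;
}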

When a program is loaded into memory and becomes a process, it can be divided into four sections ─
stack, heap, text and data. A simplified layout of a process inside main memory consists of the following
components –

1. Stack – The process stack contains temporary data such as method/function parameters, return
addresses and local variables.
2. Heap – Memory that is dynamically allocated to the process during its run time.
3. Text – The compiled program code; the process's current activity is represented by the value of
the program counter and the contents of the processor's registers.
4. Data – This section contains the global and static variables.
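A short, purely illustrative C program can make these four sections concrete: globals and statics live in the data section, locals on the stack, malloc'd memory on the heap, and the compiled instructions in the text section. The addresses printed are entirely system-dependent.

/* layout_demo.c - rough sketch of where a process's variables live. */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                 /* data section (initialized globals) */
static int static_var;               /* data/BSS section (static storage)  */

void where_things_live(void)
{
    int local_var = 7;                         /* stack: locals/parameters   */
    int *heap_var = malloc(sizeof *heap_var);  /* heap: run-time allocation  */

    printf("text  (code)  : %p\n", (void *)where_things_live);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("data  (static): %p\n", (void *)&static_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local) : %p\n", (void *)&local_var);

    free(heap_var);
}

int main(void)
{
    where_things_live();
    return 0;
}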

Process Life Cycle

When a process executes, it passes through different states. These states may differ between operating
systems, and their names are not standardized. In general, a process can be in one of the following five
states at a time.

1. Start – This is the initial state when a process is first started/created.
2. Ready – The process is waiting to be assigned to a processor. Ready processes are waiting to have
the processor allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or while running, if it is interrupted by the scheduler so that the
CPU can be assigned to some other process.
3. Running – Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.
4. Waiting – The process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input or waiting for a file to become available.
5. Terminated or Exit – Once the process finishes its execution, or is terminated by the operating
system, it is moved to the terminated state, where it waits to be removed from main memory.
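The five-state life cycle can be modelled directly in code. The sketch below (state and function names are illustrative, not taken from any real kernel) walks one process through a typical sequence of transitions.

/* states_demo.c - illustrative sketch of the five-state process model. */
#include <stdio.h>

typedef enum { START, READY, RUNNING, WAITING, TERMINATED } proc_state;

static const char *state_name(proc_state s)
{
    static const char *names[] = { "Start", "Ready", "Running", "Waiting", "Terminated" };
    return names[s];
}

/* Record a state change for a given process id. */
static void transition(int pid, proc_state *current, proc_state next)
{
    printf("P%d: %s -> %s\n", pid, state_name(*current), state_name(next));
    *current = next;
}

int main(void)
{
    proc_state p0 = START;
    transition(0, &p0, READY);      /* admitted by the long-term scheduler     */
    transition(0, &p0, RUNNING);    /* dispatched by the short-term scheduler  */
    transition(0, &p0, WAITING);    /* blocks, e.g. waiting for user input     */
    transition(0, &p0, READY);      /* the awaited resource becomes available  */
    transition(0, &p0, RUNNING);
    transition(0, &p0, TERMINATED); /* finishes execution                      */
    return 0;
}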

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process. The
PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a
process as listed below in the table –

1. Process State – The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process Privileges – Required to allow or disallow access to system resources.
3. Process ID – Unique identification for each process in the operating system.
4. Pointer – A pointer to the parent process.
5. Program Counter – A pointer to the address of the next instruction to be executed for this
process.
6. CPU Registers – The various CPU registers whose contents must be saved when the process
leaves the running state, so that execution can resume later.
7. CPU Scheduling Information – Process priority and other scheduling information required to
schedule the process.
8. Memory Management Information – Information such as the page table, memory limits and
segment table, depending on the memory scheme used by the operating system.
9. Accounting Information – The amount of CPU time used for process execution, time limits,
execution ID, etc.
10. I/O Status Information – A list of the I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different
information in different operating systems. The PCB is maintained for a process throughout its lifetime
and is deleted once the process terminates.
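Conceptually, a PCB is just a record holding the fields listed above. The struct below is a teaching sketch with invented field names and sizes; a real kernel's equivalent (for example, Linux's task_struct) is far larger and more complex.

/* pcb_sketch.c - illustrative Process Control Block record. */
#include <stdio.h>
#include <stdint.h>

typedef enum { READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int         pid;              /* unique process ID                        */
    int         parent_pid;       /* pointer/ID of the parent process         */
    proc_state  state;            /* current state (ready, running, ...)      */
    int         priority;         /* CPU scheduling information               */
    uint64_t    program_counter;  /* address of the next instruction          */
    uint64_t    registers[16];    /* saved CPU registers (the context)        */
    void       *page_table;       /* memory-management information            */
    uint64_t    cpu_time_used;    /* accounting information                   */
    int         open_files[16];   /* I/O status: devices/files allocated      */
    struct pcb *next;             /* link used by the scheduling queues       */
};

int main(void)
{
    struct pcb p = { .pid = 1, .parent_pid = 0, .state = READY, .priority = 5 };
    printf("PCB: pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}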

Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process
from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.

Categories of Scheduling: There are two categories of scheduling:

1. Non-preemptive: Here a resource cannot be taken from a process until the process completes its
execution. The switching of resources occurs only when the running process terminates or moves to
a waiting state.
2. Preemptive: Here the OS allocates resources to a process for a fixed amount of time. The process
may then switch from the running state to the ready state, or from the waiting state to the ready
state. This switching occurs because the CPU may give priority to other processes: the running
process can be replaced by a process with a higher priority.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues –

- Job queue − This queue keeps all the processes in the system.
- Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
- Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready queue and the run queue, which can
have only one entry per processor core on the system.
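Because each scheduling queue simply links PCBs together, a ready queue can be sketched as a FIFO linked list: a PCB is enqueued when its process becomes ready and dequeued when the scheduler dispatches it. The types and names below are illustrative.

/* ready_queue.c - minimal FIFO ready queue of PCBs (illustrative). */
#include <stdio.h>
#include <stdlib.h>

struct pcb {
    int pid;
    struct pcb *next;        /* link to the next PCB in the same queue */
};

struct queue { struct pcb *head, *tail; };

/* Place a PCB at the tail of a queue (its process becomes ready). */
static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove the PCB at the head of a queue (its process is dispatched). */
static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);                       /* P1, P2, P3 become ready */
    }
    struct pcb *p;
    while ((p = dequeue(&ready)) != NULL) {       /* dispatch in FIFO order  */
        printf("dispatching P%d\n", p->pid);
        free(p);
    }
    return 0;
}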

Two-State Process Model

Two-state process model refers to running and non-running states which are described below:

1. Running – When a new process is created, it enters the system in the running state.
2. Not Running – Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented
using a linked list. The dispatcher is used as follows: when a process is interrupted, it is transferred
to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the
dispatcher then selects a process from the queue to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are
of three types −

Long Term Scheduler:

It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the job queue and loads them into memory for execution,
where they become eligible for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.

Short Term Scheduler:

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with
the chosen set of criteria. It carries out the change of a process from the ready state to the running state:
the CPU scheduler selects a process from among the processes that are ready to execute and allocates the
CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler:

Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces
the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In this situation, to remove the process from memory and make space
for other processes, the suspended process is moved to secondary storage. This is called swapping, and the
process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers:


S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | Its speed is the fastest of the three. | Its speed lies between those of the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.

A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms:

- First-Come, First-Served (FCFS) Scheduling


- Shortest-Job-Next (SJN) Scheduling
- Priority Scheduling
- Shortest Remaining Time
- Round Robin(RR) Scheduling
- Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state it cannot be preempted until it completes its allotted time,
whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running
process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)

- Jobs are executed on a first come, first served basis (the CPU is assigned to the process that arrives first).
- It is a non-preemptive algorithm.
- Easy to understand and implement.
- Its implementation is based on FIFO queue.
- Poor in performance as average wait time is high.

Using the process set shown in the table under Shortest Job Next below (arrival times 0, 1, 2, 3 and
execution times 5, 3, 8, 6), the service times under FCFS are 0, 5, 8 and 16, and the wait time of each
process is as follows –

Process   Wait Time = Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
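The FCFS figures above can be reproduced with a few lines of code. The sketch below uses the same four processes as the tables in this section (arrival times 0, 1, 2, 3 and burst times 5, 3, 8, 6) and computes each wait time as service time minus arrival time.

/* fcfs.c - wait-time calculation for First Come First Serve (illustrative). */
#include <stdio.h>

int main(void)
{
    int arrival[] = { 0, 1, 2, 3 };     /* processes already sorted by arrival */
    int burst[]   = { 5, 3, 8, 6 };
    int n = 4, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i]; /* CPU idle until arrival  */
        int wait = clock - arrival[i];              /* service - arrival time  */
        printf("P%d waits %d\n", i, wait);
        total_wait += wait;
        clock += burst[i];                          /* run job to completion   */
    }
    printf("Average wait time: %.2f\n", (double)total_wait / n);
    return 0;
}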

Shortest Job Next (SJN)

- This is also known as shortest job first, or SJF (out of all the available (waiting) processes, it selects
the one with the smallest burst time to execute next).
- This is a non-preemptive scheduling algorithm; its preemptive variant is Shortest Remaining Time,
described below.
- It is the best approach to minimize waiting time.
- It is easy to implement in batch systems, where the required CPU time is known in advance.
- It is impossible to implement in interactive systems, where the required CPU time is not known.
- The scheduler must know in advance how much time a process will take.

Given: a table of processes with their arrival times and execution times.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8
Waiting time of each process is as follows –

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
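The same result can be computed with a small sketch that, at each decision point, picks the shortest burst among the processes that have already arrived and runs it to completion (non-preemptive). The process set matches the table above.

/* sjn.c - non-preemptive Shortest Job Next wait times (illustrative). */
#include <stdio.h>

int main(void)
{
    int arrival[] = { 0, 1, 2, 3 };
    int burst[]   = { 5, 3, 8, 6 };
    int n = 4, done[4] = { 0 }, clock = 0, total_wait = 0;

    for (int completed = 0; completed < n; completed++) {
        int pick = -1;
        /* Among arrived, unfinished processes, choose the shortest burst. */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { clock++; completed--; continue; }   /* CPU idle */

        int wait = clock - arrival[pick];      /* service time - arrival time */
        printf("P%d waits %d\n", pick, wait);
        total_wait += wait;
        clock += burst[pick];                  /* run the job to completion   */
        done[pick] = 1;
    }
    printf("Average wait time: %.2f\n", (double)total_wait / n);
    return 0;
}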

Priority Based Scheduling

- Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
- Each process is assigned a priority. Process with highest priority is to be executed first and so on.
- Processes with same priority are executed on first come first served basis.
- Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Given: a table of processes with their arrival times, execution times, and priorities. Here we consider 1 to
be the lowest priority (a larger number means a higher priority).

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5
Waiting time of each process is as follows –

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
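Only the selection rule changes for priority scheduling: among the arrived processes, choose the one with the highest priority number (here a larger number means a higher priority, matching the table above). A sketch:

/* priority.c - non-preemptive priority scheduling wait times (illustrative). */
#include <stdio.h>

int main(void)
{
    int arrival[]  = { 0, 1, 2, 3 };
    int burst[]    = { 5, 3, 8, 6 };
    int priority[] = { 1, 2, 1, 3 };        /* larger number = higher priority */
    int n = 4, done[4] = { 0 }, clock = 0, total_wait = 0;

    for (int completed = 0; completed < n; completed++) {
        int pick = -1;
        /* Among arrived, unfinished processes, choose the highest priority. */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick < 0 || priority[i] > priority[pick]))
                pick = i;
        if (pick < 0) { clock++; completed--; continue; }   /* CPU idle */

        int wait = clock - arrival[pick];
        printf("P%d waits %d\n", pick, wait);
        total_wait += wait;
        clock += burst[pick];
        done[pick] = 1;
    }
    printf("Average wait time: %.2f\n", (double)total_wait / n);
    return 0;
}

Processes with equal priority (P0 and P2) are resolved in arrival order, which matches the first come, first served tie-break described above.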

Shortest Remaining Time

- Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
- The processor is allocated to the job closest to completion, but it can be preempted by a newly
ready job with a shorter time to completion.
- It is impossible to implement in interactive systems, where the required CPU time is not known.
- It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

- Round Robin is a preemptive process scheduling algorithm.
- Each process is given a fixed amount of time to execute, called a time quantum.
- Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
- Context switching is used to save the states of preempted processes.

With a time quantum of 3 and the same process set, the wait time of each process is as follows –

Process   Wait Time = Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
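The sketch below simulates Round Robin with a FIFO ready queue, placing newly arrived processes ahead of a preempted one, and reproduces the average of 8.5 (the quantum of 3 is inferred from the figures above, since it is not stated explicitly).

/* rr.c - Round Robin simulation with time quantum 3 (illustrative). */
#include <stdio.h>

#define N 4
#define QUANTUM 3

int main(void)
{
    int arrival[N] = { 0, 1, 2, 3 };
    int burst[N]   = { 5, 3, 8, 6 };
    int remaining[N], finish[N];
    int queue[64], head = 0, tail = 0;       /* simple FIFO of process indexes */
    int admitted[N] = { 0 };

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    int clock = 0, done = 0;
    queue[tail++] = 0; admitted[0] = 1;       /* P0 arrives at time 0 */

    while (done < N) {
        int p = queue[head++];
        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        clock += slice;                       /* run p for one time slice */
        remaining[p] -= slice;

        /* Admit every process that has arrived by now and is not yet queued. */
        for (int i = 0; i < N; i++)
            if (!admitted[i] && arrival[i] <= clock) {
                queue[tail++] = i; admitted[i] = 1;
            }

        if (remaining[p] == 0) { finish[p] = clock; done++; }
        else queue[tail++] = p;               /* preempted: back of the queue */
    }

    double total_wait = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - arrival[i] - burst[i];   /* turnaround - burst */
        printf("P%d waits %d\n", i, wait);
        total_wait += wait;
    }
    printf("Average wait time: %.2f\n", total_wait / N);
    return 0;
}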

Multiple-Level Queues Scheduling

Multiple-level queue scheduling is not an independent scheduling algorithm. It makes use of other existing
algorithms to group and schedule jobs with common characteristics.

- Multiple queues are maintained for processes with common characteristics.


- Each queue can have its own scheduling algorithms.
- Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based
on the algorithm assigned to the queue.
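A multilevel queue can be pictured as an array of ready queues ordered by priority: the dispatcher always serves the highest-priority non-empty queue, and each queue may apply its own policy (both use FIFO here for brevity). The two-level split and the names below are illustrative assumptions.

/* mlq.c - minimal multilevel queue sketch: system jobs served before user jobs. */
#include <stdio.h>

#define LEVELS 2          /* 0 = system/CPU-bound queue, 1 = user/I/O-bound queue */
#define MAXJOBS 8

struct level { int jobs[MAXJOBS]; int head, tail; };

static void enqueue(struct level *q, int pid) { q->jobs[q->tail++] = pid; }
static int  empty(const struct level *q)      { return q->head == q->tail; }
static int  dequeue(struct level *q)          { return q->jobs[q->head++]; }

int main(void)
{
    static struct level queues[LEVELS];   /* zero-initialized ready queues */

    enqueue(&queues[1], 101);   /* user job        */
    enqueue(&queues[0], 7);     /* system job      */
    enqueue(&queues[1], 102);   /* another user job */

    /* Dispatcher: always pick from the highest-priority non-empty queue. */
    for (int served = 0; served < 3; served++)
        for (int lvl = 0; lvl < LEVELS; lvl++)
            if (!empty(&queues[lvl])) {
                printf("dispatch P%d from queue %d\n", dequeue(&queues[lvl]), lvl);
                break;
            }
    return 0;
}

Running it dispatches the system job (P7) first, then the user jobs in FIFO order, illustrating how priorities between queues dominate the order within each queue.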

3.0. Interrupt & Masking Traps


4.0. Operating System Kernel
5.0. Operating System Commands
