
LECTURE NOTES

ON
OPERATING SYSTEMS
BCA-4th Semester
Miss Saira Bi, lecturer

JYOTI COLLEGE OF MANAGEMENT SCIENCE & TECHNOLOGY, BAREILLY

Department of Computer Science


1.1 Introduction: Operating system and function
What is an Operating System?
A program that acts as an intermediary between a user of a computer and the
computer hardware.

Operating system goals:
 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner

Definition
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.

Following are some of the important functions of an operating system.

 Memory Management
 Processor Management
 Device Management
 File Management
Memory Management
Memory management refers to management of Primary Memory or Main Memory.
Main memory is a large array of words or bytes where each word or byte has its
own address.
Main memory provides fast storage that can be accessed directly by the CPU. For
a program to be executed, it must be in main memory. An Operating System does
the following activities for memory management −
 Keeps track of primary memory, i.e., which parts of it are in use, by whom,
and which parts are not in use.
 In multiprogramming, the OS decides which process will get memory when
and how much.
 Allocates memory when a process requests it.
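As a rough illustration of this bookkeeping, the following Python sketch keeps a table of fixed-size memory partitions and records which process, if any, currently owns each one. The partition count and process names are made up for the example; real memory managers track variable-sized regions, page tables, and much more.

# Minimal sketch of primary-memory bookkeeping with fixed-size partitions.
# The partition count and process names are illustrative only.
class MemoryManager:
    def __init__(self, num_partitions):
        self.partitions = [None] * num_partitions   # None means the partition is free

    def allocate(self, process_name):
        """Give the first free partition to the requesting process."""
        for i, owner in enumerate(self.partitions):
            if owner is None:
                self.partitions[i] = process_name
                return i
        return None                                  # no free partition available

    def release(self, process_name):
        """De-allocate every partition owned by the process."""
        for i, owner in enumerate(self.partitions):
            if owner == process_name:
                self.partitions[i] = None

mm = MemoryManager(4)
mm.allocate("P0")
mm.allocate("P1")
mm.release("P0")
print(mm.partitions)    # [None, 'P1', None, None]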

Processor Management
In a multiprogramming environment, the OS decides which process gets the
processor when and for how much time. This function is called process
scheduling. An Operating System does the following activities for processor
management −
 Keeps track of the processor and the status of processes. The program
responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when a process no longer requires it.

Device Management
An Operating System manages device communication via their respective drivers. It
does the following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as
the I/O controller.
 Decides which process gets the device when and for how much time.
 Allocates devices in an efficient way.
 De-allocates devices.

File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
An Operating System does the following activities for file management −
 Keeps track of information, location, uses, status, etc. These collective facilities
are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.

1.2 Evolution of Operating Systems


The evolution of operating systems is directly dependent on the development of
computer systems and how users use them. The timeline below gives a quick tour
of computing systems over the past fifty years.

Early Evolution
 1945: ENIAC, Moore School of Engineering, University of Pennsylvania.
 1949: EDSAC and EDVAC
 1949: BINAC - a successor to the ENIAC
 1951: UNIVAC by Remington Rand
 1952: IBM 701
 1956: The interrupt
 1954-1957: FORTRAN was developed

1.3 Operating System services


One set of operating-system services provides functions that are helpful to the user:

 Communications – Processes may exchange information, either on the same
computer or between computers over a network. Communication may be via
shared memory or through message passing (packets moved by the OS).
 Error detection – The OS needs to be constantly aware of possible errors, which
may occur in the CPU and memory hardware, in I/O devices, or in user programs.
For each type of error, the OS should take the appropriate action to ensure
correct and consistent computing. Debugging facilities can greatly enhance the
user's and programmer's abilities to use the system efficiently.

Another set of OS functions exists for ensuring the efficient operation of the system
itself via resource sharing:

 Resource allocation – When multiple users or multiple jobs are running
concurrently, resources must be allocated to each of them. There are many types
of resources: some (such as CPU cycles, main memory, and file storage) may have
special allocation code, while others (such as I/O devices) may have general
request and release code.
 Accounting – To keep track of which users use how much and what kinds of
computer resources.
 Protection and security – The owners of information stored in a multiuser or
networked computer system may want to control use of that information, and
concurrent processes should not interfere with each other. Protection involves
ensuring that all access to system resources is controlled. Security of the system
from outsiders requires user authentication and extends to defending external
I/O devices from invalid access attempts.

1.4 OS Components
1.4.1 File Management
A file is a set of related information which is defined by its creator. It commonly
represents programs, both source and object forms, and data. Data files can be
numeric, alphabetic, or alphanumeric.

1.4.2 Process Management


The process management component is a procedure for managing the many
processes that are running simultaneously on the operating system. Every software
application program has one or more processes associated with it when it is
running.

1.4.3 I/O Device Management


One of the important uses of an operating system is to hide the variations of
specific hardware devices from the user.

1.4.4 Network Management


Network management is the process of administering and managing computer
networks. It includes performance management, fault analysis, provisioning of
networks, and maintaining the quality of service.

1.4.5 Secondary-Storage Management


The most important task of a computer system is to execute programs. These
programs, along with the data they access, must be in main memory during
execution.

1.4.6 Security Management


The various processes in an operating system need to be secured from each other's
activities. For that purpose, various mechanisms can be used to ensure that processes
which want to operate on files, memory, the CPU, and other hardware resources have
proper authorization from the operating system.

1.5 Operating System Types

1.5.1 Batch operating system


The users of a batch operating system do not interact with the computer directly.
Each user prepares his job on an off-line device like punch cards and submits it to
the computer operator. To speed up processing, jobs with similar needs are
batched together and run as a group. The programmers leave their programs with
the operator and the operator then sorts the programs with similar requirements into
batches.
1.5.2 Time-sharing operating systems
Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time. Time-sharing or
multitasking is a logical extension of multiprogramming. The processor's time,
shared among multiple users simultaneously, is termed time-sharing.
The main difference between Multiprogrammed Batch Systems and Time-Sharing
Systems is that in case of Multiprogrammed batch systems, the objective is to
maximize processor use, whereas in Time-Sharing Systems, the objective is to
minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches
occur so frequently that the user can receive an immediate response. For example,
in transaction processing, the processor executes each user program in a short
burst or quantum of computation. That is, if n users are present, then each user
gets a time quantum. When a user submits a command, the response time is a few
seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of time. Computer systems that were designed primarily
as batch systems have been modified to time-sharing systems.
Advantages of time-sharing operating systems are as follows −

 Provides the advantage of quick response.


 Avoids duplication of software.
 Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −

 Problem of reliability.
 Question of security and integrity of user programs and data.
 Problem of data communication.

1.5.3 Multiprogramming
Sharing the processor, when two or more programs reside in memory at the same
time, is referred to as multiprogramming. Multiprogramming assumes a single
shared processor. Multiprogramming increases CPU utilization by organizing jobs
so that the CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.
An OS does the following activities related to multiprogramming.
 The operating system keeps several jobs in memory at a time.
 This set of jobs is a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in the
memory.
 Multiprogramming operating systems monitor the state of all active programs
and system resources using memory management programs to ensure that
the CPU is never idle unless there are no jobs to process.
Advantages

 High and efficient CPU utilization.


 User feels that many programs are allotted CPU almost simultaneously.
Disadvantages

 CPU scheduling is required.


 To accommodate many jobs in memory, memory management is required.

1.5.4 Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by
switching between them. Switches occur so frequently that the users may interact
with each program while it is running. An OS does the following activities related to
multitasking −
 The user gives instructions to the operating system or to a program directly,
and receives an immediate response.
 The OS handles multitasking in such a way that it can handle multiple
operations and execute multiple programs at a time.
 Multitasking Operating Systems are also known as Time-sharing systems.
 Each user has at least one separate program in memory.

 A program that is loaded into memory and is executing is commonly
referred to as a process.

1.5.5 Distributed operating System


Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users. Data processing jobs are distributed among the
processors accordingly.
The processors communicate with one another through various communication
lines (such as high-speed buses or telephone lines). These are referred to as loosely
coupled systems or distributed systems. Processors in a distributed system may
vary in size and function. These processors are referred to as sites, nodes, computers,
and so on.
The advantages of distributed systems are as follows −

 With resource sharing facility, a user at one site may be able to use the
resources available at another.
 Speeds up the exchange of data with one another via electronic mail.
 If one site fails in a distributed system, the remaining sites can
potentially continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.
1.5.6 Real Time System
Real-time systems are usually dedicated, embedded systems. An operating system
does the following activities related to real-time system activity.

 In such systems, Operating Systems typically read from and react to sensor
data.
 The Operating system must guarantee response to events within fixed
periods of time to ensure correct performance.

1.5.7 Network operating System


A Network Operating System runs on a server and provides the server the
capability to manage data, users, groups, security, applications, and other
networking functions. The primary purpose of the network operating system is to
allow shared file and printer access among multiple computers in a network,
typically a local area network (LAN), a private network, or other networks.
Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and
BSD.
The advantages of network operating systems are as follows −

 Centralized servers are highly stable.


 Security is server managed.
 Upgrades to new technologies and hardware can be easily integrated into
the system.
 Remote access to servers is possible from different locations and types of
systems.
The disadvantages of network operating systems are as follows −

 High cost of buying and running a server.


 Dependency on a central location for most operations.
 Regular maintenance and updates are required.
2.1 Process concept

CPU Scheduling
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
When a program is loaded into memory and becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory.

1. Stack – The process stack contains temporary data such as method/function
parameters, return addresses, and local variables.

2. Heap – This is memory that is dynamically allocated to a process during its run time.

3. Text – This includes the current activity represented by the value of the Program
Counter and the contents of the processor's registers.

4. Data – This section contains the global and static variables.

2.2 Process state transitions


Processes in the operating system can be in any of the following states:

 NEW- The process is being created.


 READY- The process is waiting to be assigned to a processor.
 RUNNING- Instructions are being executed.
 WAITING- The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
 TERMINATED- The process has finished execution.
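The transitions between these states can be captured in a small Python sketch. The transition table below is a simplified reading of the classic five-state model (admit, dispatch, preempt, block, wake, exit) and is illustrative rather than taken from the notes.

from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Allowed transitions in the simplified five-state model.
TRANSITIONS = {
    State.NEW: {State.READY},                      # admitted by the long-term scheduler
    State.READY: {State.RUNNING},                  # dispatched to the CPU
    State.RUNNING: {State.READY,                   # preempted (time slice expired)
                    State.WAITING,                 # blocked on I/O or an event
                    State.TERMINATED},             # finished execution
    State.WAITING: {State.READY},                  # awaited event occurred
    State.TERMINATED: set(),
}

def can_move(src, dst):
    return dst in TRANSITIONS[src]

print(can_move(State.RUNNING, State.WAITING))   # True
print(can_move(State.WAITING, State.RUNNING))   # False (must go through READY)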
2.3 Schedulers
A process migrates among the various scheduling
queues throughout its lifetime. The operating system
must select, for scheduling purposes, processes from
these queues in some fashion.

2.3.1 Long Term Scheduler


Long term scheduler is also known as a job scheduler. This scheduler selects
processes from the job queue and loads them into memory for execution. It also
regulates the degree of multiprogramming.

However, the main goal of this type of scheduler is to offer a balanced mix of jobs,
such as processor-bound and I/O-bound jobs, which allows multiprogramming to be managed.

2.3.2 Medium Term Scheduler


Medium-term scheduling is an important part of swapping. It handles the
swapped-out processes. A running process can become suspended if it makes an
I/O request. A suspended process cannot make any progress towards completion.
In order to remove the process from memory and make space for other processes,
the suspended process should be moved to secondary storage.

2.3.3 Short Term Scheduler


The short-term scheduler is also known as the CPU scheduler. The main goal of this
scheduler is to boost system performance according to the set criteria. It selects
one process from the group of processes that are ready to execute and allocates the
CPU to it. The dispatcher gives control of the CPU to the process selected by the
short term scheduler.

2.4 Scheduling algorithms

A Process Scheduler schedules different processes to be assigned to the CPU
based on particular scheduling algorithms. There are five popular process
scheduling algorithms which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling

2.4.1 First Come First Serve (FCFS)


 Jobs are executed on a first-come, first-served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.
Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 0 - 0 = 0

P1 5 - 1 = 4

P2 8 - 2 = 6

P3 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
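The FCFS process table is not reproduced above, but the wait times match the arrival and execution times given in the table under Shortest Job Next (section 2.4.2): P0 arrives at 0 with burst 5, P1 at 1 with burst 3, P2 at 2 with burst 8, and P3 at 3 with burst 6. A minimal Python sketch of the calculation, under that assumption:

# FCFS wait times: wait = service time - arrival time.
# Arrival/execution times assumed from the table in section 2.4.2.
processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

clock = 0
waits = []
for name, arrival, burst in processes:        # already in arrival order
    start = max(clock, arrival)               # CPU may be idle until the job arrives
    waits.append(start - arrival)             # service time - arrival time
    clock = start + burst                     # next job starts when this one ends

print(waits)                                  # [0, 4, 6, 13]
print(sum(waits) / len(waits))                # 5.75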

2.4.2 Shortest Job Next (SJN)


 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known
in advance.
 Impossible to implement in interactive systems where required CPU time is
not known.
 The processor should know in advance how much time the process will take.

Given: Table of processes, and their Arrival time, Execution time


Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows −

Process Waiting Time

P0 0 - 0 = 0

P1 5 - 1 = 4

P2 14 - 2 = 12

P3 8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
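A minimal Python sketch of non-preemptive SJN for the table above: whenever the CPU becomes free, the scheduler picks, from the jobs that have already arrived, the one with the shortest execution time. The dictionary layout is just one convenient way to hold the table.

# Non-preemptive Shortest Job Next for the table above.
processes = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}   # name: (arrival, burst)

clock, waits = 0, {}
remaining = dict(processes)
while remaining:
    # Jobs that have already arrived; if none, advance the clock to the next arrival.
    ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
    if not ready:
        clock = min(a for a, _ in remaining.values())
        continue
    # Pick the ready job with the shortest execution time.
    name = min(ready, key=lambda n: ready[n][1])
    arrival, burst = remaining.pop(name)
    waits[name] = clock - arrival
    clock += burst

print(waits)                             # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12}
print(sum(waits.values()) / len(waits))  # 5.25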

2.4.3 Priority Based Scheduling


 Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be
executed first and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements
or any other resource requirement.

Given: Table of processes and their arrival time, execution time, and priority. Here
we consider 1 to be the lowest priority.

Process Arrival Time Execution Time Priority Service Time


P0 0 5 1 0

P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5

Waiting time of each process is as follows −

Process Waiting Time

P0 0 - 0 = 0

P1 11 - 1 = 10

P2 14 - 2 = 12

P3 5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
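A minimal Python sketch of the non-preemptive priority scheduling traced above. Since the text states that 1 is the lowest priority, the sketch treats a larger number as higher priority; breaking ties by earlier arrival (FCFS) is an assumption, but it reproduces the service times in the table.

# Non-preemptive priority scheduling; larger number = higher priority.
processes = {"P0": (0, 5, 1), "P1": (1, 3, 2), "P2": (2, 8, 1), "P3": (3, 6, 3)}   # (arrival, burst, priority)

clock, waits = 0, {}
remaining = dict(processes)
while remaining:
    ready = {n: v for n, v in remaining.items() if v[0] <= clock}
    if not ready:
        clock = min(v[0] for v in remaining.values())
        continue
    # Highest priority first; earlier arrival (FCFS) breaks ties.
    name = max(ready, key=lambda n: (ready[n][2], -ready[n][0]))
    arrival, burst, _ = remaining.pop(name)
    waits[name] = clock - arrival
    clock += burst

print(waits)                             # {'P0': 0, 'P3': 2, 'P1': 10, 'P2': 12}
print(sum(waits.values()) / len(waits))  # 6.0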

2.4.4 Shortest Remaining Time


 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can
be preempted by a newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is
not known.
 It is often used in batch environments where short jobs need to be given
preference.
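The notes give no worked example for SRT, so the sketch below reuses the process set from section 2.4.2 and simulates the schedule one time unit at a time; the resulting wait times (average 5.0) are an illustration, not a figure from the notes.

# Preemptive Shortest Remaining Time, simulated one time unit at a time.
processes = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}   # name: (arrival, burst)

remaining = {name: burst for name, (_, burst) in processes.items()}
finish = {}
clock = 0
while remaining:
    # Among the arrived, unfinished jobs, run the one with the least
    # remaining time for a single time unit.
    ready = [n for n in remaining if processes[n][0] <= clock]
    if not ready:
        clock += 1
        continue
    name = min(ready, key=lambda n: remaining[n])
    remaining[name] -= 1
    clock += 1
    if remaining[name] == 0:
        del remaining[name]
        finish[name] = clock

waits = {n: finish[n] - arrival - burst for n, (arrival, burst) in processes.items()}
print(waits)                             # {'P0': 3, 'P1': 0, 'P2': 12, 'P3': 5}
print(sum(waits.values()) / len(waits))  # 5.0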

2.4.5 Round Robin Scheduling


 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and
another process executes for its time period.
 Context switching is used to save states of preempted processes.
Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
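The quantum and process table are not stated in this section; the wait times above are reproduced by the process set from section 2.4.2 with a time quantum of 3, which the following Python sketch assumes. Newly arrived processes are placed in the ready queue ahead of the process that was just preempted.

from collections import deque

# Round Robin with an assumed time quantum of 3.
processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]   # (name, arrival, burst)
quantum = 3

remaining = {n: b for n, _, b in processes}
arrivals = deque(processes)            # sorted by arrival time
queue, clock, finish = deque(), 0, {}

def admit(upto):
    # Move every process that has arrived by time `upto` into the ready queue.
    while arrivals and arrivals[0][1] <= upto:
        queue.append(arrivals.popleft()[0])

admit(0)
while queue or arrivals:
    if not queue:                      # CPU idle until the next arrival
        clock = arrivals[0][1]
        admit(clock)
        continue
    name = queue.popleft()
    run = min(quantum, remaining[name])
    clock += run
    remaining[name] -= run
    admit(clock)                       # new arrivals queue before the preempted process
    if remaining[name] > 0:
        queue.append(name)
    else:
        finish[name] = clock

waits = {n: finish[n] - a - b for n, a, b in processes}
print(waits)                             # {'P1': 2, 'P0': 9, 'P3': 11, 'P2': 12}
print(sum(waits.values()) / len(waits))  # 8.5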


Page Replacement Algorithm
A page replacement algorithm looks at the limited information about accessing the
pages provided by hardware, and tries to select which pages should be replaced to
minimize the total number of page misses, while balancing it with the costs of
primary storage and processor time of the algorithm itself. There are many different
page replacement algorithms.

First In First Out (FIFO) algorithm


 Oldest page in main memory is the one which will be selected for replacement.
 Easy to implement, keep a list, replace pages from the tail and add new
pages at the head.
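A minimal Python sketch of FIFO replacement that counts page faults. The reference string and the three-frame memory are illustrative values, not taken from the notes; the same string is reused for the other algorithms below so the fault counts can be compared.

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement (oldest page evicted first)."""
    frames = deque()                  # leftmost element = oldest page
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

# Illustrative reference string and frame count.
print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # 10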

Optimal Page algorithm


 An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. An optimal page-replacement algorithm exists, and has been called
OPT or MIN.
 Replace the page that will not be used for the longest period of time. This
requires knowing, for each resident page, the time at which it will next be used.
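A sketch of OPT on the same illustrative reference string; it evicts the resident page whose next use lies farthest in the future (or that is never used again). Since OPT needs future knowledge, it serves as a lower bound for comparison rather than a practical algorithm.

def optimal_page_faults(reference_string, num_frames):
    """Count page faults under OPT: evict the page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                  # hit
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # Evict the resident page whose next use is farthest away (or never).
        victim = max(frames, key=lambda p: future.index(p) if p in future else float("inf"))
        frames.remove(victim)
        frames.append(page)
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # 7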
Least Recently Used (LRU) algorithm
 Page which has not been used for the longest time in main memory is the
one which will be selected for replacement.
 Easy to implement, keep a list, replace pages by looking back into time.
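A sketch of LRU on the same illustrative reference string, keeping the resident pages ordered from least to most recently used.

def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU: evict the page unused for the longest time."""
    frames, faults = [], 0            # frames ordered from least to most recently used
    for page in reference_string:
        if page in frames:
            frames.remove(page)       # refresh: move to the most-recently-used end
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)         # evict the least recently used page
        frames.append(page)
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # 9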

Page Buffering algorithm


 To get a process started quickly, keep a pool of free frames.
 On a page fault, select a page to be replaced.
 Write the new page into a frame from the free pool, update the page table, and
restart the process.
 Now write the dirty page out to disk and place the frame holding the
replaced page in the free pool.

Least Frequently Used (LFU) algorithm


 The page with the smallest count is the one which will be selected for
replacement.
 This algorithm suffers from the situation in which a page is used heavily
during the initial phase of a process, but then is never used again.
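A sketch of one common LFU variant on the same illustrative reference string. In this sketch, use counts persist even after a page is evicted and ties are broken by position in the frame list; both choices are assumptions, since the notes do not specify them.

from collections import Counter

def lfu_page_faults(reference_string, num_frames):
    """Count page faults under LFU: evict the resident page with the smallest use count."""
    frames, counts, faults = [], Counter(), 0
    for page in reference_string:
        counts[page] += 1             # counts persist across evictions in this sketch
        if page in frames:
            continue                  # hit
        faults += 1
        if len(frames) == num_frames:
            victim = min(frames, key=lambda p: counts[p])   # ties: first page in the list
            frames.remove(victim)
        frames.append(page)
    return faults

print(lfu_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # 8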
