OPERATING SYSTEM CONCEPTS
&
COMPUTER FUNDAMENTALS
SACHIN G. PAWAR
SUNBEAM, PUNE.
Introduction To OS
What is an Operating System?
An OS is a program that manages the computer
hardware.
It also provides a basis for application programs and
acts as an intermediary between the computer user and
the computer hardware.
OS is a resource allocator: it allocates resources such as CPU time,
memory space, file storage space, I/O devices, and so on.
OS is the control program: it manages the execution of
user programs to prevent errors and improper use of the
computer.
Introduction To OS
The one program running at all times on the
computer is the kernel; everything else is
either a system program (shipped with the OS)
or an application program.
Computer Components (Top-Level View):
2. I/O Modules: Buffers
3. Main memory: Instructions, Data
4. System Bus
Introduction
When data are moved over longer distances, to or from
a remote device, the process is known as data
communications.
Bus: A bus is a communication pathway connecting two
or more devices. A key characteristic of a bus is that it is a
shared transmission medium.
Multiple devices connect to the bus, and a signal
transmitted by any one device is available for reception
by all other devices attached to the bus.
Typically a bus consists of multiple communication
pathways, or lines; each line is capable of transmitting
signals representing binary 1 and binary 0.
Introduction
System Bus: A bus that connects major computer
components (processor, memory, I/O) is called the system
bus.
There are many different bus designs, but on any bus the lines
can be classified into three functional groups: data, address,
and control lines.
5. Magnetic disk
The basic element of a semiconductor memory is the memory cell.
DRAM and SRAM
System calls provide an interface to the services made
available by an OS.
These calls are generally available as routines written in C and
C++.
Each process is represented in the OS by a process
control block (PCB), also called a task control block.
It contains many pieces of information associated with a
specific process, including the process state, program counter,
CPU registers, CPU-scheduling information, memory-management
information, accounting information, and I/O status information.
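As a rough illustration of these fields, here is a minimal sketch of a PCB written as a C structure; the field names and types below are assumptions chosen for readability, not the layout used by any real kernel.

/* Illustrative sketch of a process control block (PCB).
   Field names and types are teaching assumptions, not any
   particular OS's actual layout. */
struct pcb {
    int   pid;               /* process identifier              */
    int   state;             /* e.g. NEW, READY, RUNNING, ...   */
    void *program_counter;   /* address of the next instruction */
    long  registers[16];     /* saved CPU register contents     */
    int   priority;          /* CPU-scheduling information      */
    void *page_table;        /* memory-management information   */
    long  cpu_time_used;     /* accounting information          */
    int   open_files[16];    /* I/O status information          */
};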
Dispatcher:
The dispatcher is the module that gives control of the CPU to the
process selected by the scheduler; this involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program
to restart that program.
Dispatch Latency:
The time it takes for the dispatcher to stop one process
and start another running is known as dispatch latency.
PROCESS MANAGEMENT
Scheduling Criteria:
1. CPU Utilization: MAX: We want to keep the CPU as busy as
possible.
2. Throughput: MAX: One measure of work is the number of
processes that are completed per time unit; this is called
throughput.
3. Turnaround time: MIN: The interval from the time of
submission of a process to the time of its completion is the
turnaround time. It is the sum of the periods spent
waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.
4. Waiting time: MIN: It is the sum of the periods spent waiting
in the ready queue.
5. Response time: MIN: It is the time from the submission of a
request until the first response is produced.
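As a small worked example with made-up numbers: a process submitted at time 0 that waits 4 ms to get into memory, waits 6 ms in the ready queue, executes for 24 ms on the CPU, and does no I/O has a turnaround time of 4 + 6 + 24 = 34 ms and a waiting time of 6 ms; if its first output appears 12 ms after submission, its response time is 12 ms.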
PROCESS MANAGEMENT
Scheduling Algorithms:
1. FCFS: First-Come, First-Served Scheduling:
The process that arrives first is scheduled first.
It can be implemented with a FIFO queue.
The FCFS algorithm is non-preemptive.
e.g. Process-CPU Burst Time: P1-24, P2-3, P3-3 (used in the C
sketch below).
2. SJF: Shortest-Job-First Scheduling:
The process with the minimum CPU burst time is executed first.
This algorithm gives the minimum average waiting time.
e.g. Process-CPU Burst Time: P1-6, P2-8, P3-7, P4-3.
Preemptive SJF preempts the currently executing process if a
newly arrived process has a shorter remaining burst time.
Also referred to as the shortest-remaining-time-first scheduling
algorithm.
e.g. Process-Arrival Time-CPU Burst Time:
P1-0-8, P2-1-4, P3-2-9, P4-3-5.
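The following C sketch computes the average waiting time for the FCFS and non-preemptive SJF examples above, assuming (as an illustration) that all processes arrive at time 0; FCFS takes the processes in arrival order, and non-preemptive SJF simply sorts them by burst time.

/* Sketch: average waiting time for FCFS and non-preemptive SJF,
   assuming every process arrives at time 0. Burst times are the
   example values from these slides. */
#include <stdio.h>
#include <stdlib.h>

static double avg_waiting(const int burst[], int n)
{
    double total = 0.0;
    int clock = 0;
    for (int i = 0; i < n; i++) {
        total += clock;     /* process i waits for all processes before it */
        clock += burst[i];
    }
    return total / n;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int fcfs[] = { 24, 3, 3 };       /* P1, P2, P3 in arrival order */
    int sjf[]  = { 6, 8, 7, 3 };     /* P1..P4 burst times          */

    printf("FCFS average waiting time: %.2f ms\n", avg_waiting(fcfs, 3));

    qsort(sjf, 4, sizeof sjf[0], cmp_int);   /* shortest job first */
    printf("SJF  average waiting time: %.2f ms\n", avg_waiting(sjf, 4));
    return 0;
}

With these numbers the sketch prints 17.00 ms for FCFS and 7.00 ms for SJF, which illustrates why SJF is said above to give the minimum average waiting time.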
PROCESS MANAGEMENT
3. Priority Scheduling
e.g. Process-Burst Time-Priority
P1-10-3, P2-1-1, P3-2-4, P4-1-5, P5-5-2.
Indefinite blocking/starvation: a low-priority process may wait
indefinitely for the CPU.
Aging: gradually increasing the priority of processes that have
waited for a long time, to prevent starvation.
4. Round-Robin Scheduling
This algorithm gives the minimum response time.
e.g. Process-CPU Burst Time:
P1-24, P2-3, P3-3.
time quantum = 4 ms (a small, fixed slice of CPU time).
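As a worked trace of this example: with a 4 ms quantum the CPU runs P1 for 0-4, P2 for 4-7, P3 for 7-10, and then P1 again in 4 ms slices from 10 to 30. P1 therefore waits 10 - 4 = 6 ms, P2 waits 4 ms, and P3 waits 7 ms, for an average waiting time of 17/3, about 5.67 ms; short processes get the CPU quickly, which is why Round-Robin gives good response time.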
PROCESS MANAGEMENT
5. Multilevel Queue Scheduling:
Depending on the nature of the processes, the
ready queue is split into multiple sub-queues,
and each queue can have a different
scheduling algorithm.
6. Multilevel Feedback Queue Scheduling:
If a process is not getting sufficient CPU time
in its current queue, it can be shifted into
another queue by the OS. This enhanced concept
is known as the multilevel feedback queue.
PROCESS MANAGEMENT
Multi-Processor Scheduling:
1. Asymmetric Multi-Processing:
A single processor, called the master server, handles all
scheduling decisions, I/O processing, and other system activities.
The other processors execute only user code.
In this approach only one processor accesses the system data
structures, reducing the need for data sharing.
2. Symmetric Multi-Processing:
Each processor is self-scheduling.
All processes may be in a common ready queue, or each
processor may have its own private queue of ready processes.
e.g. Windows XP, Windows 2000, Solaris, Linux, and Mac OS
X support SMP.
PROCESS MANAGEMENT
INTER PROCESS COMMUNICATION
Independent Process:
A process is independent if it cannot affect or be affected
by the other processes executing in the system. Any process
that does not share data with any other process is independent.
Cooperating Process:
A process is cooperating if it can affect or be affected by
the other processes executing in the system. Any process that
shares data with other processes is a cooperating process.
PROCESS MANAGEMENT
Cooperating processes require an inter-process
communication (IPC) mechanism that allows
them to exchange data and information.
There are two fundamental models of IPC:
1. Shared Memory Model: In this model, a region of
memory that is shared by cooperating processes is
established. Processes then can exchange information
by reading and writing data to the shared region.
2. Message Passing Model: In this model,
communication takes place by means of messages
exchanged between the cooperating processes.
PROCESS MANAGEMENT
Pipe:
A pipe is used to communicate between two processes.
It is a stream-based, unidirectional communication mechanism.
There are two types of pipes:
1. Unnamed Pipe: used to communicate between related
processes.
2. Named Pipe/FIFO: used to communicate between
unrelated processes.
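A minimal sketch of an unnamed pipe on a POSIX system: the parent process writes a message into the pipe and the related child process created by fork() reads it.

/* Minimal sketch: parent writes into an unnamed pipe,
   the child (a related process created by fork) reads from it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                    /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == 0) {               /* child: reads from the pipe */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        exit(0);
    }
    close(fd[0]);                 /* parent: writes into the pipe */
    const char *msg = "hello over the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}

A named pipe (FIFO) would instead be created with mkfifo() and opened by its path name, which is what lets unrelated processes communicate through it.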
Message Queue:
Used to transfer packets of data from one process to
another process.
It is a bidirectional IPC mechanism.
PROCESS MANAGEMENT
Signals:
A process can send a signal to another process, or the OS can
send a signal to a process. e.g. SIGKILL, SIGSEGV
Sockets:
A socket is defined as an endpoint for communication.
Using sockets, one process can communicate with another
process on the same machine or on a different machine in the
network.
Socket = IP Address + Port
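A minimal sketch of the "IP address + port" idea using a TCP client socket in C; the address 127.0.0.1 and port 8080 are made-up example values, not a real service.

/* Sketch: create a TCP socket (communication endpoint) and connect
   to a server identified by IP address + port. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* the endpoint */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = { 0 };
    server.sin_family = AF_INET;
    server.sin_port   = htons(8080);                      /* port       */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);    /* IP address */

    if (connect(sock, (struct sockaddr *)&server, sizeof server) == 0) {
        const char *req = "hello";
        write(sock, req, strlen(req));                    /* send data  */
    } else {
        perror("connect");
    }
    close(sock);
    return 0;
}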
RPC (Remote Procedure Call)
Used to call a procedure in another process on the same
machine or on a different machine in the network.
PROCESS MANAGEMENT
Process Coordination/Process Synchronization:
Cooperating processes can either directly share a
logical address space (both code and data) or share data only
through files or messages. Concurrent access to shared data may
result in data inconsistency.
The mechanisms to ensure the orderly execution of cooperating
processes, so that data consistency is maintained, are called
process synchronization mechanisms.
Deadlock: a deadlock can arise if the following four conditions
hold simultaneously:
1. Mutual exclusion: at least one resource must be held in a
non-sharable mode.
2. Hold and Wait: a process holds a resource and waits for another
resource.
3. No preemption: a resource cannot be forcibly taken away; it is
released only when the holding task is completed.
4. Circular wait: a circular chain of processes exists in which each
process waits for a resource held by the next.
Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed
graph called the system resource-allocation graph.
1. Resource-Allocation-Graph Algorithm
2. Banker’s Algorithm
3. Recovery from the deadlock:
1. Process termination
2. Resource Preemption
MEMORY MANAGEMENT
An address generated by the CPU is commonly
referred to as a logical address, whereas an
address seen by the memory unit - that is, one
loaded into the memory-address register of the
memory - is commonly referred to as a physical
address.
The runtime mapping from virtual/logical to
physical memory addresses is done by a
hardware device called Memory Management
Unit (MMU).
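As a simple worked example of this mapping (using one common MMU scheme based on a relocation register; the numbers are illustrative): if the relocation register holds 14000 and the CPU generates logical address 346, the MMU adds the two and the memory unit sees physical address 14000 + 346 = 14346.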
MEMORY MANAGEMENT
SWAPPING:
A process must be in memory to be executed. A process,
however, can be swapped temporarily out of memory to a backing
store and then brought back into memory for continued execution.
Both the first fit and best fit strategies for memory allocation suffer
from external fragmentation. As processes are loaded and removed
from memory, the free memory space is broken into little pieces.
External fragmentation exists when there is enough total memory
space to satisfy a request but the available spaces are not contiguous;
storage is fragmented into a large number of small holes.
MEMORY MANAGEMENT
Internal Fragmentation:
Unused memory that is internal to a partition is called internal
fragmentation.
To solve the fragmentation problem, two memory-management
schemes are used:
1. Paging
2. Segmentation
MEMORY MANAGEMENT
Paging:
Paging is a memory-management scheme that permits the
physical address space of a process to be noncontiguous.
2. Super Block:
Contains metadata of the file system: label, no. of data blocks,
no. of free data blocks, and info about free data blocks and
other blocks.
STORAGE MANAGEMENT
3. i-node List/Master File Table:
The information/metadata of a file is stored in its i-node; the
i-nodes are kept in the "i-node list".
4. Data Block:
The data or the contents of file are stored in
data block.
FAT: File Allocation Table: e.g. Win 95
STORAGE MANAGEMENT
Disk Allocation Mechanisms:
Each file system allocates data blocks to files
in different ways:
1. Contiguous Allocation:
The number of blocks required for the file is allocated
contiguously.
The inode of the file contains the starting block address and the
number of data blocks.
Faster sequential and random access.
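For example (illustrative block numbers): a file that starts at block 10 and needs 5 blocks occupies blocks 10 through 14, and block i of the file is simply disk block 10 + i, which is why both sequential and random access are fast.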
STORAGE MANAGEMENT
2. Linked Allocation:
Each data block of the file contains data/contents
and the address of the next data block of that file.
The inode contains the addresses of the starting and ending data
blocks.
No external fragmentation, faster sequential access.
Slower random access.
E.g. FAT
STORAGE MANAGEMENT
3. Indexed Allocation:
A special data block contains the addresses of the data blocks
of the file. This block is called the "index block".
The address of the index block is stored in the inode
of the file.
No external fragmentation, faster random and
sequential access.
A file cannot grow beyond a certain limit (the number of block
addresses that fit in the index block).
STORAGE MANAGEMENT
Disk Scheduling:
HDD Structure:
Access Time: The time required to perform a
read/write operation on a particular sector of the
disk is called the "disk access time".
Disk access time includes two components: seek
time + rotational latency.
Seek Time: the time required to move the head to the
desired cylinder (track).
Rotational Latency: the time required to rotate the
platters so that the desired sector reaches the head.
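A small worked example with illustrative figures: on a 7200 RPM disk one rotation takes 60/7200 s, about 8.33 ms, so the average rotational latency is about half a rotation, roughly 4.17 ms; assuming an average seek time of 5 ms, the average access time is roughly 5 + 4.17 = 9.17 ms.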
STORAGE MANAGEMENT
Disk Scheduling Algorithms:
When a number of requests are pending for disk
cylinders, the magnetic head is moved according to certain
algorithms; these are called disk scheduling
algorithms.
1. FCFS:
Requests are handled in the order in which they arrive.
2. SSTF (Shortest Seek Time First):
The request for the cylinder nearest to the current position of the
magnetic head is handled first.
3. SCAN or Elevator:
The magnetic head keeps moving from cylinder 0 to the maximum
cylinder and back, continuously serving cylinder requests along the way.
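As a worked comparison with a made-up request queue: with the head at cylinder 53 and pending requests for cylinders 98, 183, 37, 122, 14, 124, 65, 67, FCFS serves them in arrival order and moves the head 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 cylinders in total, while SSTF serves them in the order 65, 67, 37, 14, 98, 122, 124, 183 and moves the head only 236 cylinders.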
STORAGE MANAGEMENT
4. C-SCAN:
The magnetic head keeps moving from cylinder 0 to the maximum
cylinder serving requests, and then jumps back to cylinder 0
directly (without serving requests on the way back).
5. LOOK:
An implementation policy of SCAN or C-SCAN: the head moves only
as far as the last pending request in each direction, and
if no requests are pending the magnetic head stops.