
■■■■■■■■■■■■■■■■■■■■■■■■■■■■( UNIT 1)■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■

xxxxxxxxxxx( Introduction & Process Management )xxxxxxxxx

Operating system: (2Marks-2015)(2Marks-2014)


An operating system is software that provides an interface between the hardware and the user. The
operating system is responsible for:

♥ management of processes

♥ allocation and sharing of hardware such as RAM and disk space

♥ acting as a host for the applications that run on it.

Examples: MS Windows, Windows NT, UNIX, LINUX

Types of Operating System: (2M-2015)(15M-2011)


 Batch processing System

 Multiprogramming System

 Time sharing System

 Parallel System

 Distributed System

 Real time System

1) Batch processing System:


 Batch processing is a technique in which Operating System collects programs and data together in a
batch (group) before processing starts.

 Jobs were submitted to the computer in the form of punched cards.

 Jobs are stored in a queue until the computer is ready to process them.

 Jobs are processed in first-come, first-served order.

 The OS keeps a number of jobs in memory and executes them without any manual intervention.

 When job completes its execution, its memory is released and the output for the job gets copied
into an output spool for later printing or processing.
Advantages of Batch systems:

 Scheduling is very simple.

 It does not require any critical device management.

 Provides simple forms of file management.

Disadvantages of Batch systems:

x Turn-around time can be large from the user's point of view.

x A job could enter an infinite loop.

x Because of slow I/O devices, the CPU is often idle.

x Result of each job is produced only at the end of a batch

2) Multi Programming System: (5M-2014) (2M-2012)(6M-2012)


♥ In a multiprogramming system, two or more programs reside in main memory at the same
time and share the processor.

♥ The operating system switches jobs in and out of the processor according to priority

♥ The processor is so fast that it seems that many jobs are being processed at the same time

♥ The operating system picks and executes one of the jobs in the memory. The job in execution
may have to wait for some task, such as an I/O operation to complete

(Figure: memory layout of a multi-programmed system)
Advantages of Multiprogramming System:

 Multiple jobs can be run concurrently.

 Increased CPU and I/O utilization.

 Increase the throughput rate.

 Increase the efficiency of the system.

Disadvantages of Multiprogramming System:

x CPU scheduling is required

x The user cannot interact with the job when it is executing

x Programmers cannot modify the program as it executes

x To accommodate many jobs in memory, memory management is required.

3) TIME-SHARING SYSTEM: (2M-2014)(4M-2015)(2M-2010)


♥ Time sharing (or multitasking) is a logical extension of multiprogramming.

♥ Time-sharing systems were developed to provide interactive use of a computer system.

♥ A time-share operating system allows many users to share the computer resource
simultaneously.

♥ The CPU executes multiple jobs by switching among them, but the switches occur so
frequently that users can interact with each program while it is running.

♥ In a time-sharing system, each process executes for only a short time before the CPU switches to another user's process.

♥ Similar to multiprogramming, even in time-sharing systems, several jobs are kept simultaneously
in memory
Advantages of Time-Sharing System.

 Provide advantage of quick response.

 Avoids duplication of software.

 Reduces CPU idle time.

 Allows many users to share the computer simultaneously.

 Users can interact with the job when it is executing.

Disadvantages of Time-Sharing System.

x Problem of reliability.

x Question of security and integrity of user programs and data.

x Problem of data communication/ inter-process communication is complicated.

4) Parallel System:
 Parallel systems are also known as multiprocessor systems.

 A single program is processed by two or more CPUs.

 To execute using multiple CPUs, a problem is broken into discrete parts that can be solved
concurrently.

 They are called tightly coupled systems due to high degree of resource sharing.

Advantages of Parallel systems:

 Increased throughput: by increasing the number of processors, we hope to get more work done in
less time.

 Increased reliability: if functions are distributed properly among several processors, then the failure
of one processor will not halt the system.

Disadvantages of Parallel Systems:

x Implementation is complex

x Requires resources management and protection

5) Distributed Systems: (8M-2012)


 A distributed system is a collection of physically separate, possibly heterogeneous computer
systems that are networked to provide the user with the access to the various resources that the
system maintains.

 Access to a shared resource increases computation speed, functionality, data availability and
reliability

 The processors communicate with one another through various communication lines, such as
high-speed buses or telephone lines. These systems are referred to as loosely coupled systems or
distributed systems.
 Distributed systems also consist of multiple computers but differ from networked systems in that
the multiple computers are transparent to the user. Often there are redundant resources and a
sharing of the workload among the different computers.

Advantages of Distributed Systems:

 Resource sharing provides mechanisms for sharing files at remote sites.

 Reliability-if one site fails in a distributed system, then the remaining sites can continue operating.

 Communication can be enabled by passing messages between processes running on different
computers.

 Computation speedup as computations can be partitioned into sub-computations which can be run
concurrently on various sites in a distributed system.

Disadvantages of Distributed Systems:

x Implementation is complex.

x Require memory and resource management and protection.

6) REAL-TIME OPERATING SYSTEM: (4M-2015)


 A real time OS is used when rigid time requirements have been placed on the operation of a
processor or the flow of data.

 Used as control device in a dedicated application.

 Systems used to control machinery, scientific instruments and industrial systems are real time
systems.

 A real time system has well-defined, fixed time constraints. Processing must be done within the
defined constraints, or the system will fail.

 Guided missile systems and medical monitoring equipment are examples of real time operating
systems.

There are two types of real-time OS

 Hard real-time systems

 Soft real-time systems

Hard real time systems: guarantee that critical tasks will complete on time. Safety-critical systems
are typically hard real-time systems.

Soft real-time systems: are less restrictive, simply providing that a critical real time tasks get
priority over the others and retains that priority until it completes.
Advantages of Real Time Systems:

 Multitasking operation is accomplished by scheduling processes for execution independently of each
other.

 Memory management in real time systems is comparatively less demanding than in other types of
operating systems.

 File management in real-time systems usually aims at increasing the speed of access.

Disadvantages of Real Time Systems:

x A time limit is allocated to each event.

x Implementation is too costly

SPOOLING(Simultaneous Peripheral Operations OnLine): (5M-2015)


 Spooling is the process of placing data in a temporary working area for another program to process;
examples include print spooling and mail spools.

 With spooling, all processes can access the resource without waiting.

 The spool is processed in ascending order, working on the basis of a FIFO (first in, first out)
algorithm.

 For example, in printer spooling, the documents/files that are sent to the printer are first stored in
the memory or printer spooler. Once the printer is ready, it fetches the data from that spool and
prints it.

Advantages of Spooling:

 It overlaps the I/O operation of one job with the computation of other jobs. For example, while
reading the input of one job, the spooler may be printing the output of a different job.

 The spooling operation uses a disk as a very large buffer.

 It increases the performance of the system
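The FIFO behaviour described above can be sketched with a simple queue; a minimal illustration in Python (the class name and document names are invented for the example, not part of any real print subsystem):

```python
from collections import deque

class PrintSpooler:
    """Toy FIFO print spool: jobs are buffered and later processed in order."""
    def __init__(self):
        self.spool = deque()            # stands in for the disk-based buffer

    def submit(self, document):
        self.spool.append(document)     # the submitter returns immediately

    def process_next(self):
        # The printer fetches jobs in first-in, first-out order.
        return self.spool.popleft() if self.spool else None

spooler = PrintSpooler()
for doc in ["report.pdf", "invoice.txt", "photo.png"]:
    spooler.submit(doc)                 # all three are accepted without waiting

printed = []
while (job := spooler.process_next()) is not None:
    printed.append(job)                 # jobs come out in submission order
```

Note that the submitters never block: they only append to the spool, which is exactly the overlap of I/O and computation the advantages above describe.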


File Management: (4M-2014)
 A file is a collection of related information defined by its creator. Commonly, files represent
programs and data.

 The operating system is responsible for the following activities in connection with file
management:

– File creation and deletion.

– Directory creation and deletion.

– Support of primitives for manipulating files and directories.

– Mapping files onto secondary storage.

– File backup on stable storage media.

Operating System Services: (7M-2015) (7M-2012)(5M-2010) (P-I-F-C-E-U)

♥ Program execution – system capability to load a program into memory and to run it.

♥ I/O operations – since user programs cannot execute I/O operations directly, the operating
system must provide some means to perform I/O.

♥ File-system manipulation – program capability to read, write, create, and delete files.

♥ Communications – exchange of information between processes executing either on the same
computer or on different systems tied together by a network.

♥ Error detection – ensure correct computing by detecting errors in the CPU and memory hardware,
in I/O devices, or in user programs.

♥ User Interface – through which the user submits his or her programs.

Operating System Functions: (5M-2010) (M-P-D-R-A-P)

Additional functions exist not for helping the user, but rather for ensuring efficient system operations.

♥ Memory management - The o/s keeps track of the memory: which parts are in use and by whom.

♥ Process management - The o/s keeps track of processors and the status of processes. It decides
who will have a chance to use the processor.

♥ Device management -The o/s keeps track of the devices, channels, control units and decides what
is an efficient way to allocate the device

♥ Resource allocation – allocating resources to multiple users or multiple jobs running at the same
time.
♥ Accounting – keep track of and record which users use how much and what kinds of computer
resources for account billing or for accumulating usage statistics.

♥ Protection & Security – ensuring that all access to system resources is controlled. Security means
to ensure that unauthorized access is restricted.
System Calls: (5M-2012)
♥ System calls act as an interface between a process and the operating system.

♥ The operating system provides services to the user through system calls.

♥ They are generally available as assembly-language instructions.

Types of System Calls:


1) Process Control:
 create process, terminate process
 get process attributes, set process attributes
2) File Management/Manipulation:
 create file, delete file
 get file attributes, set file attributes
3) Device Management:
 read, write, reposition
 get device attributes, set device attributes
4) Information Maintenance:
 get time or date, set time or date
 get system data, set system data
5) Communication:
 create, delete communication connection
 send, receive messages

Virtual Machines: (2M-2011)


 A virtual machine (VM) shares the physical hardware resources (CPU, memory, disk drives) with other
users, but isolates the operating system or application so that the end-user experience is unchanged.
 Virtual machines use hardware more efficiently, which lowers hardware and associated maintenance
costs and reduces power and cooling demand. They also ease management, because virtual hardware
does not fail in the way physical hardware does.

Process State: (5M-2014)(6M-2015)
As a process executes, it changes state. The state of a process is defined as the current activity of
the process.

A process can be in one of the following five states at a time.

1. New: The process is being created.

2. Ready: The process is waiting to be assigned to a processor.

3. Running: The process is currently being executed on the CPU.

4. Waiting: The process is waiting for some event to occur (e.g., user input or a file to open).

5. Terminated: The process has finished execution.
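The five states above and the legal moves between them can be written down as a small transition table; a sketch for illustration (the transition set follows the standard five-state model, matching the list above):

```python
# Allowed transitions in the five-state process model described above.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted
    "ready":      {"running"},                         # dispatched to the CPU
    "running":    {"ready", "waiting", "terminated"},  # preempt / I/O wait / exit
    "waiting":    {"ready"},                           # awaited event occurred
    "terminated": set(),                               # final state
}

def can_transition(src, dst):
    """True if a process may move directly from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

One detail the table makes explicit: a waiting process cannot run directly; it must first return to the ready state and be dispatched again.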

PROCESS CONTROL BLOCK: (5M-2015)


A Process Control Block (PCB), also called a task control block, is a data structure
in the operating system kernel containing the information needed to manage a particular process.
Process State Information: (P-P-C-I-A-C-M)

♥ Process State: The state may be new, ready, running, waiting or terminated.

♥ Program Counter: The counter indicates the address of the next instruction to be executed
for this process.

♥ CPU Registers: Whenever the processor switches from one process to another, the current
register contents of the old process are saved so that it can be resumed later.

♥ I/O Status Information: This includes the list of I/O devices allocated to this process, a
list of open files, and so on.

♥ Accounting Information: The amount of CPU and real time used, time limits, job or process numbers,
and so on.

♥ CPU-Scheduling Information: This includes the process priority, pointers to
scheduling queues, and any other scheduling parameters.

♥ Memory Limits: The values of the base and limit registers, the page tables, and other memory-management details.
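For illustration, the PCB fields listed above can be mirrored in a small record type; a real kernel keeps this as a struct in kernel memory, and the field names below are assumptions made for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a Process Control Block holding the fields described above."""
    pid: int
    state: str = "new"              # new, ready, running, waiting, terminated
    program_counter: int = 0        # address of the next instruction
    cpu_registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)    # I/O status information
    cpu_time_used: float = 0.0      # accounting information
    priority: int = 0               # CPU-scheduling information
    base_register: int = 0          # memory limits
    limit_register: int = 0

pcb = PCB(pid=42)
pcb.state = "ready"                 # admitted by the long-term scheduler
```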

Schedulers: (9M-2011)

Schedulers are special system software which handles process scheduling in various ways. Their main
task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types

 Long Term Scheduler

 Short Term Scheduler

 Medium Term Scheduler

LONG TERM SCHEDULER:


 It is also called job scheduler.

 Long term scheduler determines which programs are admitted to the system for processing.

 Job scheduler selects processes from the queue and loads them into memory for execution.

 It loads processes into memory, where they wait for CPU scheduling.

 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound.

 It also controls the degree of multiprogramming. If the degree of multiprogramming is stable,
then the average rate of process creation must be equal to the average departure rate of
processes leaving the system.

 On some systems, the long term scheduler may be absent or minimal. Time-sharing
operating systems have no long term scheduler.

 When process changes the state from new to ready, then there is use of long term scheduler.
SHORT-TERM SCHEDULER:
 It is also called CPU scheduler.

 Main objective is increasing system performance in accordance with the chosen set of criteria.

 It is the change of ready state to running state of the process.

 CPU scheduler selects process among the processes that are ready to execute and allocates CPU
to one of them.
 The short-term scheduler, also known as the dispatcher, executes most frequently and makes the
fine-grained decision of which process to execute next.

 Short term scheduler is faster than long term scheduler.

MEDIUM-TERM SCHEDULER:
 A running process may be suspended because of I/O request. Such a suspended process is then
removed from main memory and stored in secondary memory. This process is called swapping.

 Medium term scheduling is part of the swapping. It removes the processes from the memory.

 The medium term scheduler is in charge of handling the swapped-out processes.

 This is done because there is a limit on the number of active processes that can reside in main
memory.
 Therefore, a suspended process is swapped-out from main memory.

 At some later time, the process can be swapped-in into the main memory.

 All versions of Windows use swapping.

 It reduces the degree of multiprogramming.

No | Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler

1 | It is a job scheduler | It is a CPU scheduler | It is a process swapping scheduler

2 | Its speed is less than that of the short term scheduler | Its speed is the fastest among the three | Its speed lies between those of the short and long term schedulers

3 | It controls the degree of multiprogramming | It provides less control over the degree of multiprogramming | It reduces the degree of multiprogramming

4 | It is almost absent or minimal in time sharing systems | It is also minimal in time sharing systems | It is a part of time sharing systems

5 | It selects processes from the job pool and loads them into memory for execution | It selects from among the processes that are ready to execute | It can re-introduce a process into memory, and its execution can be continued
Context Switch: (5M-2012)
 When one process is running on a CPU and another needs to run on the same CPU, there is
a need to switch between the processes. This is called a context switch (a "state" save
and "state" restore).

 The context is represented in the PCB of the process. It includes the value of CPU registers,
the process state, and memory management information.

 Switching the CPU to another process requires performing a state save of the current process
and a state restore of a different process.

 When a context switch occurs, the kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run. Context-switch time is pure overhead,
as the system does no useful work while switching.

 Speed of context switch varies from system to system, depending on memory speed, number of
registers that must be copied, and the existence of special instructions.

CPU Scheduler:
Selects the process from the processes in memory that are ready to execute, and allocates the CPU to
one of them.

CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state.

2. Switches from running to ready state.

3. Switches from waiting to ready.

4. Terminates.

Dispatcher: (2M-2015)
The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler; this function involves:

 switching context

 switching to user mode

 jumping to the proper location in the user program to restart that program

Dispatch latency: The time it takes for the dispatcher to stop one process and start another running.
Types of Scheduling: (5M-2014)(5M-2015)

Non- preemptive Scheduling:


 In a non-preemptive scheduling, a selected job runs to completion which implies that once the
CPU has been allotted a process, the process keeps the CPU until it terminates or switches to
the waiting state.

Pre-emptive Scheduling: (2M-2010)


 In preemptive scheduling, the CPU can be taken away from the process anytime

Comparison/Difference b/w Non-preemptive and Preemptive Scheduling: (5M-2012)

Non-Preemptive Scheduling | Preemptive Scheduling

Once the CPU is given to a process, it cannot be taken away from that process | The CPU can be taken away at any time

Shorter jobs must wait for the completion of longer jobs | Shorter jobs need not wait

Cost is low | Cost is high

Overheads are low | Overheads are high due to the storage of non-running programs in main memory

Suitable for batch processing | Suitable for real-time and interactive time-sharing systems

It occurs when a process switches from the running state to the waiting state, or when it terminates | It occurs when a process switches from the running state to the ready state, or from the waiting state to the ready state

There is no need for context switching | Context switching becomes necessary whenever a process is preempted and a new process has to be scheduled on the CPU

The CPU is often idle during an I/O request or while waiting for the termination of a child process | Maximum utilization of the CPU

A job completes according to its allocated time | The completion time of the process in execution cannot be predicted accurately

Scheduling is done once | Rescheduling is necessary

If an interrupt occurs, the process is terminated | When an interrupt occurs, the process is temporarily suspended, to be resumed later
Types of CPU Scheduling Algorithms:
 First-Come, First-Served (FCFS) Scheduling

 Shortest-Job-First Scheduling (SJFS)

 Priority Scheduling

 Round Robin (RR) Scheduling

 Multilevel Queue Scheduling

 Multilevel Feedback Queue Scheduling

1)) First-Come, First-Served (FCFS) Scheduling:


 With this scheme, the process that requests the CPU first is allocated the CPU first.

 The implementation of the FCFS policy is easily managed with a FIFO queue.

 When a process enters the ready queue, its PCB is linked onto the tail of the queue.

 When the CPU is free, it is allocated to the process at the head of the queue.

 The running process is then removed from the queue.

 Requests are scheduled in the order in which they arrive in the system.

 The FCFS algorithm is non-preemptive. Once the CPU has been allocated to a process, that
process keeps the CPU until it releases the CPU, either by termination or by requesting I/O.

 A batch processing system is a good example of FCFS.

 The average waiting time under the FCFS policy is quite long.

Advantages:

 It is simple algorithm to write and understand.

 It is non-preemptive scheduling; a running process is not interrupted until it completes or requests I/O.

 Suitable for batch system.


Disadvantages:

x The average waiting time is not minimal in FCFS; it depends on the order and burst times of the processes.

x CPU and device utilization is low.

x Does not guarantee good response time.

x It leads to the convoy effect: short processes get stuck waiting behind a long process.

-------------------------------------------------------------------------

Example for First-Come, First-Served (FCFS) Scheduling

Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order: P1 , P2 , P3


The Gantt chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

 Waiting time for P1 = 0; P2 = 24; P3 = 27

 Average waiting time: (0 + 24 + 27)/3 = 17 ms

 Average turnaround time: (24+27+30)/3= 27 ms

-------------------------------------------------------------------------
Process Burst Time

P1 20

P2 4

P3 3

Suppose that the processes arrive in the order: P1 , P2 , P3


The Gantt chart for the schedule is:

| P1 (0-20) | P2 (20-24) | P3 (24-27) |

 Waiting time for P1 = 0; P2 = 20; P3 = 24

 Average waiting time: (0 + 20 + 24)/3 = 14.66 ms

 Turnaround time for P1=20, P2=24, P3=27

 Average turnaround time: (20+24+27)/3= 23.66 ms

 Response time for P1=0, P2=20, P3=24

 Average Response time=(0+20+24)/3=14.66 ms
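The hand calculations above can be reproduced in a few lines; a minimal sketch assuming all jobs arrive at time 0 (the function name is invented for the example), using the second example's burst times:

```python
def fcfs(burst_times):
    """FCFS waiting and turnaround times for jobs that all arrive at time 0."""
    waiting, turnaround, clock = [], [], 0
    for burst in burst_times:       # jobs run strictly in submission order
        waiting.append(clock)       # time spent queued before the CPU frees up
        clock += burst
        turnaround.append(clock)    # completion time (arrival time is 0)
    return waiting, turnaround

# The second example above: P1=20, P2=4, P3=3
w, t = fcfs([20, 4, 3])
avg_wait = sum(w) / len(w)          # (0 + 20 + 24) / 3
```

Reversing the list to `[3, 4, 20]` drops the average wait sharply, which is the convoy effect mentioned among the disadvantages.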

-------------------------------------------------------------------------

2)) Shortest-Job-First (SJFS) Scheduling: (5M-2010)


 In SJF, the process with the least estimated execution time is selected from the ready queue
for execution.

 It associates with each process, the length of its next CPU burst.

 When the CPU is available, it is assigned to the process that has the smallest next CPU burst.

 If two processes have the same length of next CPU burst, FCFS scheduling is used.

 SJF algorithm can be preemptive or non-preemptive

 Two schemes:

 Non-preemptive – once the CPU is given to a process, it cannot be preempted until it completes
its CPU burst.

 Preemptive – if a new process arrives with a CPU burst length less than the remaining time of
the currently executing process, preempt. This scheme is known as Shortest-Remaining-
Time-First (SRTF).

 SJF is optimal – gives minimum average waiting time for a given set of processes.
Advantages:

 SJF can be implemented as both Non pre-emptive and preemptive.

 Preference will be given for shorter jobs and hence average waiting time reduces.

Disadvantages:

x Knowing the length of execution time of processes in advance is difficult.

x Longer jobs are subjected to longer delays.

-------------------------------------------------------------------------

Example of Non-Preemptive SJF

Consider the following set of processes that arrive at time 0, with the length of the CPU burst time in
milliseconds:

PROCESS CPU BURST TIME

P1 6

P2 8

P3 7

P4 3

Waiting Time for P4 = 0 milliseconds

Waiting Time for P1 = 3 milliseconds

Waiting Time for P3 = 9 milliseconds

Waiting Time for P2 = 16 milliseconds

Average Waiting Time = (Total Waiting Time) / No. of Processes

= (0 + 3 + 9 + 16 ) / 4

= 28 / 4

= 7 milliseconds

Average Turnaround time=(3+9+16+24)/4=13 ms

-------------------------------------------------------------------------
Consider the following set of processes that arrive at time 0 with the length of the CPU burst time in
milliseconds:

PROCESS CPU BURST TIME

P1 5

P2 10

P3 6

P4 2

Waiting Time for P4 = 0 milliseconds

Waiting Time for P1 = 2 milliseconds

Waiting Time for P3 = 7 milliseconds

Waiting Time for P2 = 13 milliseconds

Average Waiting time=(0+2+7+13)/4=5.5 ms

Average turnaround time=(2+7+13+23)/4=11.25 ms
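The non-preemptive SJF calculation above reduces to sorting by burst length; a sketch assuming all jobs arrive at time 0 (Python's stable sort preserves FCFS order for equal bursts, matching the tie-break rule stated earlier):

```python
def sjf_nonpreemptive(bursts):
    """Waiting times under non-preemptive SJF, all jobs arriving at time 0."""
    waiting, clock = {}, 0
    for name in sorted(bursts, key=bursts.get):  # shortest CPU burst first
        waiting[name] = clock                    # queued until the CPU frees up
        clock += bursts[name]
    return waiting

# The example above: P1=5, P2=10, P3=6, P4=2 -> run order P4, P1, P3, P2
w = sjf_nonpreemptive({"P1": 5, "P2": 10, "P3": 6, "P4": 2})
avg_wait = sum(w.values()) / len(w)              # (0 + 2 + 7 + 13) / 4
```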

-------------------------------------------------------------------------
Consider the following set of processes, which arrive in the ready queue at the times given
in the table:

PROCESS ARRIVAL TIME BURST TIME

P1 0 8

P2 1 4

P3 2 9

P4 3 5

Waiting time = finish time - arrival time - burst time (the total time spent in the ready queue)

Waiting Time for P1 = 10 – 1 – 0 = 9

Waiting Time for P2 = 1 – 1 = 0

Waiting Time for P3 = 17 – 2 = 15

Waiting Time for P4 = 5 – 3 = 2

Average Waiting time=(9 + 0 + 15 + 2) / 4 = 26 / 4 = 6.5 ms

-------------------------------------------------------------------------
Consider the following example:

PROCESS ARRIVAL TIME BURST TIME

P1 0 10

P2 1 4

P3 2 8

P4 3 6

Waiting time = finish time - arrival time - burst time (the total time spent in the ready queue)

Waiting Time for P1 = 19 – 1 = 18

Waiting Time for P2 = 1 – 1 = 0

Waiting Time for P3 = 11 – 2 = 9

Waiting Time for P4 = 5 – 3 = 2

Average Waiting time=(18 + 0 + 9 + 2) / 4 = 29 / 4 = 7.25 ms
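The preemptive (SRTF) schedule above can be checked with a unit-by-unit simulation; a sketch assuming integer arrival and burst times (the function name is invented):

```python
def srtf(jobs):
    """Waiting times under shortest-remaining-time-first scheduling.
    `jobs` maps a process name to (arrival_time, burst_time)."""
    remaining = {name: burst for name, (_, burst) in jobs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= clock]
        if not ready:                # CPU idles until the next arrival
            clock += 1
            continue
        run = min(ready, key=lambda n: remaining[n])  # least remaining time
        remaining[run] -= 1          # execute one time unit
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    # waiting time = finish time - arrival time - burst time
    return {n: finish[n] - a - b for n, (a, b) in jobs.items()}

# The example above: arrivals 0, 1, 2, 3 and bursts 10, 4, 8, 6
w = srtf({"P1": (0, 10), "P2": (1, 4), "P3": (2, 8), "P4": (3, 6)})
avg_wait = sum(w.values()) / len(w)  # (18 + 0 + 9 + 2) / 4
```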

-------------------------------------------------------------------------

3)) Priority Scheduling: (5M-2012)


 A priority is associated with each process, and the CPU is allocated to the process with the
highest priority.

 Equal priority processes are scheduled in FCFS order.

 Priorities may be static or dynamic. Their initial values are assigned by the user or the system
at the process creation time.

 Priority scheduling can be preemptive or non-preemptive.

 A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.

 A non-preemptive priority scheduling algorithm will simply put the new process at the head of
the ready queue.

 Another name given to priority scheduling is Event-Driven Scheduling.


Advantages:

 Supports multi tasking system.

 Supports both preemptive and non preemptive scheduling

 Good response time.

 Low interrupt latency and high I/O bandwidth.

Disadvantage:

x Leads to starvation or indefinite blocking.

x Implementation is difficult.

x Leads to situations in which low priority process wait indefinitely for high priority processes.
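A non-preemptive priority scheduler can be sketched the same way as SJF, sorting by priority instead of burst length. The process data below is a common textbook illustration rather than an example from this document, and the smaller-number-means-higher-priority convention is an assumption:

```python
def priority_schedule(jobs):
    """Waiting times under non-preemptive priority scheduling, all arrivals
    at time 0. `jobs` maps a name to (burst_time, priority); a smaller
    number means a higher priority here. The stable sort keeps FCFS order
    for equal priorities, matching the rule stated above."""
    waiting, clock = {}, 0
    for name in sorted(jobs, key=lambda n: jobs[n][1]):
        waiting[name] = clock
        clock += jobs[name][0]
    return waiting

# Illustrative data: run order is P2, P5, P1, P3, P4
w = priority_schedule({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                       "P4": (1, 5), "P5": (5, 2)})
avg_wait = sum(w.values()) / len(w)   # (6 + 0 + 16 + 18 + 1) / 5
```

The starvation risk listed above is visible here: with a steady stream of priority-1 and priority-2 arrivals, P4 would never reach the front.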

-------------------------------------------------------------------------

4)) Round Robin Scheduling (RR):


 The RR Scheduling algorithm is designed especially for timesharing systems

 It is similar to FCFS scheduling, but pre-emption is added.

 Each and every process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready queue.

 The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue,
allocating CPU to each process for a time interval of up to 1 time quantum.

 If there are n processes in the ready queue and the time quantum is q, then each process gets
1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1) *
q time units.

-------------------------------------------------------------------------

Example of RR with Time Quantum = 2 ms

Consider the following set of processes that arrive at time 0 with the length of the CPU burst time in
milliseconds:

 Waiting Time for P1 = 0 + (6 – 2) + (10 – 8) + (13 – 12) = 4 + 2 + 1 = 7

 Waiting Time for P2 = 2 + (8 – 4) + (12 – 10) = 2 + 4 + 2 = 8

 Waiting Time for P3 = 4

 Average Waiting Time = (Total Waiting Time) / No. of Processes

 = (7 + 8 + 4) / 3

 = 19 / 3

 = 6.33 milliseconds

-------------------------------------------------------------------------
Consider the example of 3 processes P1, P2 and P3 with the following CPU burst times, given time-slice = 4 ms:

PROCESS CPU BURST TIME

P1 24

P2 3

P3 3

Waiting time for P1 = 0 + (10-4) = 6 ms

Waiting time for P2= 4 ms

Waiting time for P3= 7 ms

Average waiting time=(6+4+7)/3 = 5.66ms
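The round-robin bookkeeping above can be sketched with a circular queue. Burst times of 24, 3 and 3 ms reproduce the waiting times worked out above with a 4 ms time slice; all jobs are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Waiting times under round-robin: each process runs for at most
    `quantum` ms, then rejoins the tail of the circular ready queue."""
    queue = deque(bursts.items())            # (name, remaining burst)
    finish, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, re-queued
        else:
            finish[name] = clock
    # waiting time = finish time - burst time (all arrivals at time 0)
    return {n: finish[n] - b for n, b in bursts.items()}

w = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
avg_wait = sum(w.values()) / len(w)          # (6 + 4 + 7) / 3
```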

-------------------------------------------------------------------------
5)) MULTILEVEL QUEUE SCHEDULING:
 A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.

 The processes are permanently assigned to one queue, based on some property of the process,
such as memory size, process priority or process type.

 Each queue has its own scheduling algorithm; for example, interactive processes may be scheduled by RR and batch processes by FCFS.

 There must also be scheduling among the queues:

 One method is to assign time-slice to each queue, with which it can schedule the various
processes in its queue. For example, interactive processes may be assigned 80% of the CPU time
and the background processes may be given 20% of the CPU.

 Another method is to execute the high priority queues first and then process the lower priority
queues.
6)) Multilevel Feedback Queue
 A process can move between the various queues; aging can be implemented this way.

Example of Multilevel Feedback Queue

Three queues:

• Q0 – time quantum 8 milliseconds

• Q1 – time quantum 16 milliseconds

• Q2 – FCFS

Scheduling:

• A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds.
If it does not finish in 8 milliseconds, the job is moved to queue Q1.

• At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2.

Multilevel Feedback Queues

A multilevel-feedback-queue scheduler is defined by the following parameters:

 number of queues

 scheduling algorithms for each queue

 method used to determine when to upgrade a process to high priority

 method used to determine when to demote a process to low priority
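The three-queue example above can be sketched as a small simulation. Aging (upgrading a starved process) is omitted and all jobs are assumed to arrive at time 0, so this is only a partial model of a real feedback scheduler:

```python
def mlfq(bursts, quanta=(8, 16)):
    """Completion times under the three-queue scheme described above:
    Q0 (quantum 8 ms) -> Q1 (quantum 16 ms) -> Q2 (FCFS, runs to completion).
    A job that exhausts its quantum is demoted to the next queue."""
    queues = [list(bursts.items()), [], []]      # Q0, Q1, Q2
    finish, clock = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].pop(0)
        run = remaining if level == 2 else min(quanta[level], remaining)
        clock += run
        if remaining > run:
            queues[level + 1].append((name, remaining - run))  # demote
        else:
            finish[name] = clock
    return finish

# A 5 ms job finishes within Q0; a 30 ms job uses 8 ms in Q0, 16 ms in Q1,
# and its final 6 ms in Q2.
f = mlfq({"A": 5, "B": 30})
```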


Algorithm Evaluation: (5M-2011)
Different methods are used for Algorithm Evaluation

 Deterministic modeling

 Queuing Models

 Simulations

 Implementation

1) Deterministic modeling

Deterministic modeling – Takes a particular predetermined workload and defines the performance of
each algorithm for that workload.

Advantages:

 It is simple and fast.

 Gives exact numbers allowing algorithms to be compared.

 It helps in describing scheduling algorithms and providing examples

Disadvantages:

x It needs exact numbers for input, and its answers apply only to those cases

x It is too specific and needs exact knowledge to be useful.
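Deterministic modeling can be sketched by fixing a workload and computing each algorithm's average waiting time for it; the burst values below are hypothetical, and all jobs are assumed to arrive at t = 0 (under which SJF is simply FCFS on the sorted bursts):

```python
def fcfs_wait(bursts):
    # Waiting time of each job = sum of the bursts that run before it.
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

workload = [10, 29, 3, 7, 12]           # hypothetical burst times (ms)
print(fcfs_wait(workload))              # FCFS: 28.0
print(fcfs_wait(sorted(workload)))      # SJF (= FCFS on sorted bursts): 13.0
```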

2) Queuing Models
 Queuing models can be used to examine the distribution of CPU and I/O bursts by a formula giving
the probability of a particular CPU burst. In addition the following computations are also possible.

 Average queue length

 Average waiting time of the queue

 Average arrival rate

Advantages:

 Useful in comparing scheduling algorithms.

 Necessary to make a number of independent assumptions.

 Useful when the number of processes running on the system is not static.

Disadvantages:

x The assumptions made may not be accurate.

x Queuing models are only an approximation of a real system.

x The accuracy of the computed results may be questionable
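One such computation uses Little's formula, n = λ × W (average queue length = arrival rate × average waiting time); the rates below are hypothetical:

```python
# Little's formula: n = lambda * W
arrival_rate = 7          # processes arriving per second (assumed)
avg_wait = 2              # average seconds a process spends waiting (assumed)
avg_queue_length = arrival_rate * avg_wait
print(avg_queue_length)   # 14 processes waiting on average
```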


3) SIMULATIONS:
 Simulations require programming a model of the computer system.

 The simulation may take a long time to complete.

 For more accurate results, it may be necessary to use process information from a real system.

Advantages:

 A very accurate method of evaluating scheduling algorithms

 Simulations produce accurate results for the given inputs

Disadvantages:

x Simulations are very expensive, requiring long hours of computer time

x The design, coding and debugging of the simulator is a major task.

4) IMPLEMENTATION:
 Implementation involves evaluating an algorithm by coding it, putting it into the operating system
and observing how it works. This method places the algorithm in a real system under actual
operating conditions.

Disadvantages:

x The cost of implementation is highly expensive

x The environment in which the algorithm is used changes.


■■■■■■■■■■■■■■■■■■■■■■■■■■■■( UNIT 2)■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
xxxxxxxxx(Process Synchronization and Deadlocks)xxxxxxxx

The Critical-Section Problem: (2m-2014)(5M-2012)(5M-2011)


♥ Each process has a section of code, known as a critical section
♥ It is very important that when one process is executing in its critical section, no other process is
allowed to execute in its critical section
♥ Only one process is allowed to execute in its critical section (even with multiple processors).
♥ So each process must first request permission to enter its critical section.
♥ Each process has the following sections of code.
 Entry Section: The code that requests permission to enter its critical section
 Exit Section: The code that follows the critical section and signals that the process has left it
 Remainder Section: The rest of the program

Peterson’s Solution: (5M-2012)


 This algorithm requires both the variables turn and flag to be shared between the processes.
int turn;
boolean flag[2];
Initially turn=0 and flag[0]=flag[1]=false;
 To enter the critical section, process Pi first sets flag[i] to be true
 It then sets turn to the value j, asserting that if the other process wishes to enter the critical section,
it can do so.
 If both the processes try to enter at the same time, turn will be set to both i and j roughly at the
same time.
 But only one of these assignments will last; the other will be overwritten immediately.
 The final value of turn decides which of the two processes will enter its critical section first.
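The protocol above can be sketched with two threads incrementing a shared counter. CPython's interpreter happens to provide the sequentially consistent memory the algorithm assumes (on real hardware the stores to flag and turn would need memory barriers); the thread indices and iteration count are arbitrary:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often to stress the protocol

flag = [False, False]         # flag[i]: process i wants to enter
turn = 0
counter = 0                   # shared data guarded by the protocol

def worker(i, n):
    global turn, counter
    j = 1 - i                 # index of the other process
    for _ in range(n):
        flag[i] = True        # entry section: announce intent
        turn = j              # yield priority if the other is also trying
        while flag[j] and turn == j:
            pass              # busy-wait
        counter += 1          # critical section (+= alone is not atomic)
        flag[i] = False       # exit section

N = 1000
threads = [threading.Thread(target=worker, args=(i, N)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                # 2000: no updates were lost
```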

SEMAPHORE: (2M-2012)(8M-2014)(7M-2015)(2M-2010)

♥ A semaphore is a process synchronization tool


♥ A semaphore is an integer variable with non-negative values, which can be accessed only through two
standard operations wait() and signal().
♥ A semaphore can only be accessed using the following operations: wait() and signal().
♥ wait() operation is called when a process wants access to a resource.
♥ signal() operation is called when a process is done using a resource.

Wait()
    wait(Semaphore s) {
        while (s <= 0)
            ;          // busy-wait until s becomes positive
        s--;
    }

Signal()
    signal(Semaphore s) {
        s++;
    }
TYPES OF SEMAPHORES: (2M-2011)
Binary Semaphore:
 A semaphore whose variable is allowed to take only values of 0(busy) and 1 (free) is called binary
semaphore
 Binary semaphores are known as mutex locks, as they provide mutual exclusion.
 They are used to acquire locks.
 When a resource is available, the process in charge sets the semaphore to 1; otherwise it is set to 0.
 They can be used to solve critical-section problem for multiple processes.
Counting Semaphore:
 May have value to be greater than one, typically used to allocate resources from a pool of
identical resources.
 First the semaphore is initialized to the number of resources available
 Each process that wishes to use a resource performs a wait ( ) operation on the semaphore
(thereby decrementing the count)
 When the process releases the resources, it performs a signal ( ) operation (thereby
incrementing the count)
When the count for the semaphore becomes 0, all the resources are used and any process requesting
for the resource is blocked until the count becomes greater than 0
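A counting semaphore guarding a pool of identical resources can be sketched with the standard library's semaphore; the pool size of 3 and the 10 competing jobs are arbitrary:

```python
import threading
import time

pool = threading.Semaphore(3)    # 3 identical resources (assumed pool size)
in_use = 0                       # how many resources are held right now
peak = 0                         # the most ever held at once
guard = threading.Lock()         # protects the two counters above

def job(i):
    global in_use, peak
    pool.acquire()               # wait(): blocks when all 3 are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)             # use the resource
    with guard:
        in_use -= 1
    pool.release()               # signal(): frees one resource

threads = [threading.Thread(target=job, args=(i,)) for i in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                      # never exceeds the pool size of 3
```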

Dining-Philosophers Problem: (5M-2015)


 Five silent philosophers sit at a table around a bowl of noodles. A fork is placed between each
pair of adjacent philosophers.
 Each philosopher must alternately think and eat. However, a philosopher can only eat noodles
when he has both left and right forks. Each fork can be held by only one philosopher and so a
philosopher can use the fork only if it's not being used by another philosopher.
 After he finishes eating, he needs to put down both forks so they become available to others. A
philosopher can grab the fork on his right or the one on his left as they become available, but
can't start eating before getting both of them.
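One standard deadlock-free sketch imposes a global ordering on the forks, so every philosopher picks up the lower-numbered fork first (this breaks the circular wait); the round count is arbitrary:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one fork between each pair
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global fork order
    for _ in range(rounds):
        with forks[first]:        # always take the lower-numbered fork first
            with forks[second]:
                meals[i] += 1     # eat with both forks held
        # think (forks released by the with-blocks)

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)    # every philosopher ate 100 times; no deadlock occurred
```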

Deadlock: (2m-2014)(2M-2010)

Deadlock can be defined as permanent blocking of a set of Processes that either compete for system
resources or communicate with each other.
Example:
1. System has 2 tape drives.
2. P1 and P2 each hold one tape drive and each needs another one.
DEADLOCKS CHARACTERIZATION

The main characteristics of a Deadlock Situation: (2M-2014)


I. Necessary Conditions
II. Resource Allocation Graph

NECESSARY CONDITION:
Deadlock can occur if four conditions hold simultaneously in a system.
 Mutual exclusion: means only one process at a time can use a resource.
 Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes.
 No preemption: a resource can be released by the process holding it, only after that process
has completed its task.
 Circular wait: A circular chain of two or more processes must exist such that each of them is
waiting for a resource held by the next member of the chain.

RESOURCE ALLOCATION GRAPH: (5M-2015)(5M-2010)

 Deadlocks can be described more precisely in terms of directed graph called a system resource –
allocation graph.
 A set of vertices V and a set of edges E.
 V is partitioned into two types:
P = {P1, P2, …, Pn}, the set of all active processes in the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
 A directed edge from process Pi to resource type Rj, denoted by Pi → Rj, signifies that process Pi
has requested an instance of resource type Rj and is currently waiting for that resource. This
edge is called a request edge.
 A directed edge from resource type Rj to process Pi, denoted by Rj → Pi, signifies that an
instance of resource type Rj has been allocated to process Pi and is called an assignment edge.
 Pictorially, process Pi is denoted as a circle and each resource type Rj as a square.
 If a resource has more than one instance, then each instance is represented as a dot within the
square
 Once the process finishes accessing the resource, it releases it and the assignment edge is
deleted.
Example-RESOURCE ALLOCATION GRAPH

RESOURCE –ALLOCATION GRAPH WITH A DEADLOCK

In the above example:

Process P1 is holding an instance of resource R2, is waiting for an instance of resource type R1
Process P2 is holding an instance of R1 and R2, is waiting for an instance of resource type R3
Process P3 is holding an instance of R3
Suppose the process P3 requests an instance of resource type R2, since no resource instance is
currently available, a request edge P3 --> R2 is added to the graph.
Methods for Handling Deadlocks: (2M-2015)
 Deadlock Prevention
 Deadlock Avoidance.
 Recovery from Deadlock

We can deal with the deadlock problem in one of three ways:


1) We can use a protocol to prevent or avoid deadlocks , ensuring that the system will never enter a
deadlock state
2) We can allow the system to enter a deadlock state, detect it and recover
3) We can ignore the problem altogether and pretend that deadlocks never occur in the system (this
solution is used by most operating systems, including UNIX)

DEADLOCK PREVENTION: (7m-2014)(8M-2015)(6M-2012)

By making sure that one of these necessary conditions does not hold well, the occurrence of a Deadlock
can be prevented.
1. Mutual Exclusion – not required for sharable resources; must hold for non sharable resources.
 For example, a printer cannot be simultaneously shared by several processes.

2. Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any
other resources.
 Require process to request and be allocated all its resources before it begins execution, or allow
process to request resources only when the process has none.
Disadvantages
x Resources allocated may be unused for a long period -- low resource utilization
x More waiting time for resources -- starvation possible

3. No Preemption :
If a process that is holding some resources requests another resource that cannot be immediately
allocated to it, then all resources currently being held are released.
 Preempted resources are added to the list of resources for which the process is waiting.
 Process will be restarted only when it can regain its old resources, as well as the new ones that it is
requesting.

4. Circular Wait – impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration (list).

DEADLOCK AVOIDANCE: (7M-2011)

Requires additional information about how the resources are to be requested by the processes.
 Simplest and most useful model requires that each process declare the maximum number of
resources of each type that it may need.
 A deadlock –avoidance algorithm dynamically checks the resource -allocation state to make sure
that a circular-wait condition never exists.
 Resource-allocation state is defined by the number of available and allocated resources, and the
maximum demands of the processes
 When a process requests an available resource, system must decide if immediate allocation leaves
the system in a safe state.
 System is in safe state if there exists a safe sequence of all processes.
The two deadlock avoidance algorithms are:
1. Resource allocation graph for single instance of a resource type
2. Banker’s algorithm for multiple instances of a resource type.

SAFE STATE: (2m-2014) (2M-2011)


A system is in a safe state only if there exists a safe sequence of processes P1, P2, ……, Pn.
♥ A state is safe if the system can allocate resources to each process in some order and still avoid a
deadlock.

UNSAFE STATE: (2m-2014)


♥ If there is no allocation sequence that allows the processes to finish executing, then the system is
in an unsafe state
Basic Facts :
 If a system is in a safe state ⇒ no deadlocks.
 If a system is in an unsafe state ⇒ possibility of deadlock.
Avoidance ⇒ ensure that the system will never enter an unsafe state

Safe, Unsafe, Deadlock State

Data Structures for Banker’s Algorithm: (8M-2014)(9M-2012)(9M-2010)


Let n = number of processes, and m = number of resources types.

♥ Available: It is a vector of length m defining the number of available resources of each type.
If Available[j] = k, there are k instances of resource type Rj available.

♥ Max: It is an n x m matrix defines the maximum demand of each process for each resource type
. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj.

♥ Allocation: It is an n x m matrix defines the number of resources of each type currently allocated
to each process.
If Allocation[i,j] = k then Pi is currently allocated k instances of Rj.

♥ Need: It is an n x m matrix indicates the remaining resource required of each process.


If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need [i,j] = Max[i,j] – Allocation [i,j].


Safety Algorithm: (7M-2014)

Safety algorithm (to check for a safe state in Banker’s Algorithm ):

Let n = number of processes, and m = number of resources types.

Step1: Let work be an integer array of length m, initialized to available.


Let finish be a Boolean array of length n, initialized to false.

Step2: Find an i such that both:


o finish[i] == false
o need[i] <= work
If no such i exists, go to step 4

Step3: work = work + allocation[i];


finish[i] = true;
Go to step 2

Step4: If finish[i] == true for all i, then the system is in a safe state, otherwise unsafe.
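The four steps above can be sketched directly; the 5-process, 3-resource example values below are the commonly used textbook ones, included here only for illustration:

```python
def is_safe(available, allocation, need):
    """Banker's safety check; returns a safe sequence of process indices, or None."""
    work = list(available)                 # Step 1: work = available
    n = len(allocation)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):                 # Step 2: find i with finish[i] false, need[i] <= work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Step 3: Pi can run to completion; reclaim its allocation
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                    # Step 4: some process can never finish -> unsafe
    return sequence                        # Step 4: all finished -> safe

# Example: 5 processes, 3 resource types (A, B, C)
available  = [3, 3, 2]
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))   # [1, 3, 4, 0, 2] -- a safe sequence
```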

DEADLOCK DETECTION: (9M-2010)


 Deadlock detection is the process of actually determining that a deadlock exists and identifying
the processes and resources involved in the deadlock.
 Requires overhead
 run-time cost of maintaining necessary information and executing the detection algorithm
 potential losses inherent in recovering from deadlock
 The basic idea is to check allocation against resource availability for all possible allocation
sequences to determine if the system is in deadlocked state. Of course, the deadlock detection
algorithm is only half of this strategy. Once a deadlock is detected, there needs to be a way to
recover and several alternatives exists:
 Temporarily preempt resources from deadlocked processes.
 Back off a process to some check point allowing preemption of a needed resource and restarting
the process at the checkpoint later.
 Successively kill processes until the system is deadlock free.

Deadlock Detection Algorithm Using Single Instance of Each Resource Type:


 If all resources have only a single instance, then deadlock-detection algorithm uses a variant of
resource-allocation graph called as wait-for graph.
 A wait-for graph is obtained by removing the resource nodes and collapsing the appropriate
edges.
 An edge Pi → Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a
resource that Pi needs. An edge Pi → Pj exists in a wait-for graph if and only if the corresponding
resource-allocation graph contains two edges Pi → Rq and Rq → Pj for some resource Rq
 As before, cycles in the wait-for graph indicate deadlocks.
 This algorithm must maintain the wait-for graph, and periodically search it for cycles.
Deadlock Detection

(a) Resource allocation graph.


(b) Corresponding wait-for graph
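The periodic cycle search mentioned above can be sketched as a depth-first traversal of the wait-for graph: reaching a node that is still on the current path (gray) means a back edge, i.e. a cycle and therefore a deadlock. The graphs in the example calls are hypothetical:

```python
def has_cycle(wait_for):
    """DFS cycle detection on a wait-for graph {process: [processes it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:               # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1: a cycle, hence deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```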

■■■■■■■■■■■■■■■■■■■■■■■■■■■■( UNIT 3)■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■


xxxxx(Memory Management & Virtual Memory Management)xxxxx

The basic requirements of memory management are as follows:


 Address binding
 Dynamic loading
 Dynamic linking
 Logical and physical address space
 Overlays

 Address Binding:
 A program is normally saved on a disk as a binary executable file.
 The program has to be loaded into memory and placed within a process to be executed.
 Dynamic Loading: (2M-2015)
 In dynamic loading, a program or routine is not loaded until it is called.
 All routines are stored on the disk in a relocatable load format
 Useful when large amounts of code are needed to handle infrequently occurring cases
 Dynamic Linking:
 The concept of dynamic linking is similar to dynamic loading except that linking is postponed until
execution time
 Logical vs. Physical Address Space:
 An address generated by the CPU is called logical address
 The logical address is also known as virtual address
 An address generated by a memory unit is referred to as a physical address
 OVERLAYS: (5M-2014) (2M-2015)(2M-2011)(2M-2010)
♥ Overlaying means replacing a block of stored instructions or data with another
♥ Overlays are used to enable a process /programs to be larger than the amount of memory allocated
to it.
♥ The idea of overlays is to keep in memory only those instructions and data that are needed at any
given time.
♥ Needed when process is larger than amount of memory allocated to it.
♥ Implemented by user, no special support needed from operating system, programming design of
overlay structure is complex

Example (a two-pass assembler):

Pass 1            70 KB
Pass 2            80 KB
Symbol table      20 KB
Common routines   30 KB

Loading everything at once would require 200 KB; with overlays, Pass 1 and Pass 2 need never be in
memory at the same time.

Advantages
 Requires small amount of memory for working
 No special support needed from operating system.,
 Overlays are implemented by user,
Disadvantages
x Programming design of overlay structure is complex.

---------------------------------------------------------------------
 SWAPPING: (2M-2011)
.
♥ A process can be swapped temporarily out of memory to a backing store, and then brought back into
memory for continued execution
♥ Backing store : – fast disk large enough to accommodate copies of all memory images for all users;
must provide direct access to these memory images
♥ Roll out, roll in: – swapping variant used for priority-based scheduling algorithms; lower-priority
process is swapped out so higher-priority process can be loaded and executed

Advantages
 Swapping can increase the degree of multiprogramming
 It can be used to relocate processes and to reduce external fragmentation
Disadvantages
x Swapping whole partitions is expensive.
x The context switch time is high
x Swapping a process with pending i/o operations is very complex

CONTIGUOUS MEMORY ALLOCATION: (5M-2014)


The main memory must accommodate both the operating system and the various user processes. Hence,
main memory is usually divided into two partitions:
● Resident operating system, usually held in low memory with interrupt vector
● User processes then held in high memory
NOTE:
 In DOS systems, the first 640K of memory is known as low memory. This portion of memory is
reserved for applications, device drivers, and memory-resident programs (TSRs). Low memory is
also called conventional memory.
 An interrupt vector is the memory location of an interrupt handler, which prioritizes interrupts and
saves them in a queue if more than one interrupt is waiting to be handled.
 An interrupt is a signal from a device attached to a computer, or from a program within the
computer, that tells the OS (operating system) to stop and decide what to do next. When an
interrupt is generated, the OS saves its execution state by means of a context switch, a procedure
that a computer processor follows to change from one task to another while ensuring that the tasks
do not conflict. Once the OS has saved the execution state, it starts to execute the interrupt
handler at the interrupt vector.
A base and limit register define a logical address space

Fragmentation: (5M-2015)
Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too
small to satisfy any request.

External Fragmentation: External Fragmentation happens when a dynamic memory allocation


algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too
much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory
space exists to satisfy a request, but it is not contiguous.

Internal Fragmentation: (8M-2014)(5M-2012)(2M-2011)


Internal fragmentation is the space wasted inside of allocated memory blocks because of restriction on
the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory;
this size difference is memory internal to a partition, but not being used

Different possible solutions to fragmentation:


 Compaction
 Paging
 Segmentation
1) Compaction: (2m-2014)(2M-2012)(2M-2011)
♥ One solution to a problem of external fragmentation is compaction
♥ The goal of compaction is to shuffle memory contents to place all free memory together in one large
block.
♥ Compaction combines all the free areas into one continuous area
♥ Compaction is possible only if relocation is dynamic, and is done at execution time.
Advantages:
 Can easily be implemented.
 Reduces fragmentation.
 Higher degree of multi programming can be achieved
Disadvantages:
x Compaction time is high
x Requires special hardware for compaction which increases the cost
x Memory may contain information which is not used.
x Address manipulation is overhead on the OS.

2) PAGING: (2M-2011)
♥ Paging is a memory management scheme which permits the physical address of process to be non
contiguous.
♥ Divide physical memory into fixed-sized blocks called frames (size is power of 2, between 512
bytes and 8192 bytes).
♥ Divide logical memory into blocks of same size called pages.
♥ Keep track of all free frames.
♥ To run a program of size n pages, need to find n free frames and load program.
♥ Set up a page table to translate logical to physical addresses.

Example:

Advantages:
 Paging supports time sharing system.
 It avoids fragmentation resulting in better memory and processor utilization
 It supports virtual memory
 Compaction overhead are eliminated
 Sharing of common code is possible in a paging system
Disadvantages:
x Paging sometimes suffers from page breaks, causing internal fragmentation due to wastage of
memory in the last page allocated. On average, half a page per process is wasted.
x When the number of pages is large, it is difficult to maintain the page table.
x Hardware cost is high.
x Paging increases the context-switch time, slowing down the CPU.
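The logical-to-physical translation described above can be sketched as follows; the 1 KB page size and the page-table contents are assumptions for illustration:

```python
PAGE_SIZE = 1024          # 1 KB pages (a power of 2, assumed)

# Page table: logical page number -> physical frame number (assumed mapping)
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE      # high-order bits: page number
    offset = logical_addr % PAGE_SIZE     # low-order bits: offset, copied unchanged
    frame = page_table[page]              # page-table lookup
    return frame * PAGE_SIZE + offset

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```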

HIT RATIO: (2m-2014)(2M-2012)

♥ The percentage of times that a particular page number is found in the TLB is called hit ratio.
♥ An 80% hit ratio means that the desired page number is found in the TLB 80 percent of the time
♥ TLB is short for translation look-aside buffer, a table in the processor that contains information
about the pages in memory the processor has accessed recently.
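The hit ratio feeds directly into the effective memory-access time: a TLB hit costs one TLB lookup plus one memory access, while a miss also requires a page-table access. The 20 ns and 100 ns timings below are hypothetical:

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    # TLB hit:  TLB lookup + the actual memory access
    # TLB miss: TLB lookup + page-table access + the actual memory access
    hit = tlb_time + mem_time
    miss = tlb_time + 2 * mem_time
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80, 20, 100))   # 140.0 ns at an 80% hit ratio
```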

3) Segmentation: (2M-2011)(6M-2010)
 A program is a collection of segments. A segment is a logical unit such as:
 main program,
 procedure,
 function,
 method,
 object,
 local variables, global variables,
 common block,
 stack,
 symbol table, arrays
 Segmentation is a memory-management scheme that supports user view of memory

Segmentation: User’s View of a Program


♥ Segmentation is a memory management scheme which supports the programmer’s or user view of a
memory.
♥ Programmers always think of their programs as a collection of logically related entities or modules,
such as subroutines, procedures, function, global local variables.
♥ Each of the segments is of different lengths. Elements in a segment are identified by the offset
from the beginning of the segment.
♥ When a user program is compiled, the compiler automatically creates the segments of the input
program.
♥ The logical address of a segment is given by (segment number, offset).
♥ The components of a single segment reside in one contiguous area; however, different segments of
the same process may reside in non-contiguous areas of physical memory.
♥ Memory is allocated dynamically when the program is compiled or assembled.
♥ For each segment, a segment number is assigned and an entry in the segment table is created.
♥ Segments are logical units visible to the user program and varying in size.

DIFFERENCE B/W SEGMENTATION & PAGING (8M-2014)

Segmentation                                          Paging
- Program is divided into variable-sized segments.    - Program is divided into fixed-size pages.
- User is responsible for dividing the program        - Division is performed by the OS.
  into segments.
- Slower than paging.                                 - Faster than segmentation.
- Visible to the user.                                - Invisible to the user.
- Eliminates internal fragmentation.                  - Suffers from internal fragmentation.
- Suffers from external fragmentation.                - No external fragmentation.
- Uses (segment number, offset) to calculate          - Uses (page number, offset) to calculate
  the absolute address.                                 the absolute address.
DEMAND PAGING: (7M-2015)(5M-2012)(5M-2010)
Demand paging is a type of swapping in which pages of data are not copied from disk to RAM until they
are needed. In contrast, some virtual memory systems use anticipatory paging, in which the operating
system attempts to anticipate which data will be needed next and copies it to RAM before it is actually
required.
Thus Demand Paging is bringing a page into memory only when it is needed.

Advantages:
 Only fetch the pages of data that a program actually uses from the disk
 If a program only needs to reference a fraction of its data during each timeslice of execution,
this can significantly reduce the amount of time spent copying data to and from the disk
 Individual pages of a program’s data can be brought into the memory as needed, making the limit
on the maximum amount of data a program can reference the amount of space available on the
disk, not the amount of main memory
 The advantages outweigh the disadvantages for most applications
 This makes demand paging the choice for most current workstation/PC operating systems

Page Fault (5M-2014)(10M-2010)


When the page (data) requested by a program is not available in memory, a page fault occurs. The
operating system then brings the required page into memory; only an invalid reference (an access
outside the process's address space) causes the application to be shut down.

A page is a fixed length memory block used as a transferring unit between physical memory and an
external storage. A page fault occurs when a program accesses a page that has been mapped in address
space, but has not been loaded in the physical memory.

THRASHING: (2m-2014) (2M-2015)


 If a process does not have enough frames to hold its actively used pages, it will quickly page
fault. At this point, it must replace some page.
 However, since all its pages are in active use, it must replace a page that will be needed again right
away
 Thrashing is a condition in which excessive paging operations are taking place.

FIFO Page Replacement algorithm (7M-2014)(5M-2011)


 A simple and obvious page replacement strategy is FIFO, i.e. first-in-first-out.
 As new pages are brought in, they are added to the tail of a queue, and the page at the head of the
queue is the next victim. In the following example, 20 page requests result in 15 page faults:
 Although FIFO is simple and easy, it is not always optimal, or even efficient.
 An interesting effect that can occur with FIFO is Belady's anomaly, in which increasing the number
of frames available can actually increase the number of page faults that occur!
FIFO page-replacement algorithm
BELADY’S ANOMALY: (5M-2012)
In computer storage, Bélády's anomaly is the name given to the phenomenon where increasing the
number of page frames results in an increase in the number of page faults for a given memory access
pattern. This phenomenon is commonly experienced when using the First in First Out (FIFO) page
replacement algorithm.
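A minimal FIFO sketch that also demonstrates Belady's anomaly on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 (four frames produce more faults than three):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement over a reference string."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())   # evict the oldest resident page
            memory.add(page)
            queue.append(page)                    # newest page joins the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames -- Belady's anomaly
```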

LRU Page Replacement algorithm: (5M-2015)(9M-2012)(5M-2011)


 The prediction behind LRU, the Least Recently Used, algorithm is that the page that has not
been used in the longest time is the one that will not be used again in the near future. ( Note the
distinction between FIFO and LRU: The former looks at the oldest load time, and the latter
looks at the oldest use time. )
 Some view LRU as analogous to OPT, except looking backwards in time instead of forwards.
( OPT has the interesting property that for any reference string S and its reverse R, OPT will
generate the same number of page faults for S and for R. It turns out that LRU has this same
property. )
 Figure illustrates LRU for our sample string, yielding 12 page faults, ( as compared to 15 for
FIFO and 9 for OPT. )

LRU page-replacement algorithm:
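The figure itself is not reproduced here; a minimal LRU sketch using an ordered dictionary (insertion order doubling as recency order), run on the sample reference string with 3 frames, yields the 12 faults quoted above:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for LRU replacement over a reference string."""
    memory = OrderedDict()     # insertion order == recency order
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults, vs 15 for FIFO on the same string
```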

The memory manager is responsible for allocating primary memory to processes and for assisting the
programmer in loading and storing the contents of the primary memory.
Since primary memory can be space-multiplexed, the memory manager can allocate a portion of primary
memory to "each process" for its own use.
A block of available (unallocated) memory is called a hole.

The placement strategies are described below: (5M-2014) (5M-2010)


Best fit: The allocator places a process in the smallest block of unallocated memory in which it will fit.
Worst fit: The memory manager places a process in the largest block of unallocated memory available.
First fit: There may be many holes in the memory, so the operating system, to reduce the amount of
time it spends analyzing the available spaces, begins at the start of primary memory and allocates
memory from the first hole it encounters large enough to satisfy the request.
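The three strategies can be sketched over a list of hole sizes; the 212 KB request and the hole sizes below are hypothetical:

```python
def first_fit(holes, request):
    # Scan from the start; take the first hole that is big enough.
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Take the smallest hole that still fits the request.
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    # Take the largest available hole.
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free block sizes in KB (assumed)
print(first_fit(holes, 212))   # 1 -> the 500 KB hole
print(best_fit(holes, 212))    # 3 -> the 300 KB hole
print(worst_fit(holes, 212))   # 4 -> the 600 KB hole
```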
■■■■■■■■■■■■■■■■■■■■■■■■■■■■( UNIT 4)■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
(File Systems, File System Implementation & Disk Management)

FILE ATTRIBUTES: (5M-2014)(2m-2011)


File attributes varies from one OS to other. The common file attributes are:
♥ Name:- The symbolic file name is the only information kept in human readable form.
♥ Identifier:- The unique tag, usually a number, identifies the file within the file system. It is the
non-readable name for a file.
♥ Type:- This information is needed for those systems that supports different types.
♥ Location:- This information is a pointer to a device and to the location of the file on that device.
♥ Size:- The current size of the file and possibly the maximum allowed size are included in this
attribute.
♥ Protection:-Access control information determines who can do reading, writing, execute and so on.
♥ Time, data and User Identification:- This information must be kept for creation, last
modification and last use. These data are useful for protection, security and usage monitoring.

File Operations: (2M-2011)(6M-2010)


♥ Creating a file:- Two steps are necessary to create a file. First space in the file system for file
is found. Second an entry for the new file must be made in the directory. The directory entry
records the name of the file and the location in the file system.
♥ Writing a file:- System call is mainly used for writing in to the file. System call specify the name
of the file and the information i.e., to be written on to the file. Given the name the system search
the entire directory for the file. The system must keep a write pointer to the location in the file
where the next write to be taken place.
♥ Reading a file:- To read a file system call is used. It requires the name of the file and the
memory address. Again the directory is searched for the associated directory and system must
maintain a read pointer to the location in the file where next read is to take place.
♥ Delete a file:- System will search for the directory for which file to be deleted. If entry is
found it releases all free space. That free space can be reused by another file.
♥ Truncating the file:- User may want to erase the contents of the file but keep its attributes.
Rather than forcing the user to delete a file and then recreate it, truncation allows all attributes to
remain unchanged except for file length.
♥ Repositioning within a file:- The directory is searched for appropriate entry and the current
file position is set to a given value. Repositioning within a file does not need to involve actual i/o. The
file operation is also known as file seeks.

Functions of file management: (4M-2014)


 To provide storage of data
 Mapping logical file address to physical disk addresses.
 Management of disk space and its allocation and de allocation.
 Keeping track of all files.
 To optimize performance.
 To provide I/O support for various types of storage devices.
 To provide I/O support for multi-users.
 To provide support for protection and sharing of files recovery.
 To provide standard set of I/O interface routines.
File Access Methods: (5M-2015)(5M-2011)
The information stored in the file can be accessed as follows:
 Sequential Access
 Direct Access
 Index Access

1) Sequential Access:
 Information from the file is accessed in order, one record after the other. Compilers, multimedia
applications, sound files, etc. are the most common examples of programs using sequential access.
 In case of a read operation, the record at the location pointed by the file pointer is read and the
file pointer is then advanced to the next record.
 In case of a write operation, the record is written to the end of the file and the pointer is advanced to the end of the new record.

2) Direct access:
 The direct access method is also called relative access. It is based on a disk model of a file, because disks allow random access to any file block.
 In the direct access method, records can be read/written randomly, without any order. In this model, the file is viewed as a numbered sequence of blocks or records.
 The block number provided by the user to the operating system is a relative block number: an index relative to the beginning of the file. Thus, the first relative block of the file is 0, the next is 1, and so on.
 The relative block number allows the operating system to decide where the file should be placed and prevents the user from accessing portions of the file system that may not be part of the file.

3) Indexed Method:
 This method involves the construction of an index for the file. The index contains pointers to the
various blocks.
 To find a record in the file, the index must be searched first, and then use the pointer to access
the file directly and to find the desired record.
 The primary index file would contain pointers to secondary index files which would then point to the
actual data items.

DIRECTORY STRUCTURE: (2m-2014)


 File systems sometimes consist of millions of files. To manage these files, they need to be organized.
 To manage the data stored on a disk, the disk is divided into one or more partitions, also known as
volumes, and each partition contains information about the files in it. This information is stored
in a directory.
Operations Performed on Directory: (2m-2011)
 Search for a file:- Directory structure is searched for finding particular file in the directory.
 Create a file:- New files can be created and added to the directory.
 Delete a file:- when a file is no longer needed, we can remove it from the directory.
 List a directory:- We need to be able to list the files in directory and the contents of the
directory entry for each file in the list.
 Rename a file:- Name of the file must be changeable when the contents or use of the file is
changed. Renaming allows the position within the directory structure to be changed.
 Traverse the file system:- We may need to access every directory and every file within the directory structure, for example to take a backup copy that can be used to restore the data if the system fails.

TYPES OF DIRECTORY STRUCTURES: (7M-2014)


 Single level directory
 Two level directory
 Tree structured directories

1) Single level directory: (5M-2012)


 This is the simplest directory structure. All the files are contained in the same directory which is
easy to support and understand.

Disadvantages:-
x Not suitable for a large number of files or more than one user.
x Because all files are in a single directory, they must have unique names.
x It is difficult to remember the names of all the files as the number of files increases.
x MS-DOS allows only 11-character file names, whereas UNIX allows 255 characters.

2) Two-Level Directory:
 Separate directory for each user is created(MFD-Master File Directory, UFD-User File Directory)
 To create a file for a user, the operating system searches only that user’s UFD to check whether
another file of that name exists. To delete a file, the operating system searches for that file within
the UFD of that user. Therefore, it cannot delete another user’s file with the same name, by
mistake
Two-Level Directory Disadvantages:
x This structure isolates one user from another, which is a disadvantage when the users want
to cooperate on some task and need to access one another's files.
x To access another user's file, the path (the user name plus the path name) must be specified.

3) Tree-Structured Directories:
 A tree is the most common directory structure
 The tree has a root directory. Every file in the system has a unique path name.
 A directory contains a set of files and subdirectories. A directory is treated like any other file
and all the directories have the same internal format.
 Each user has a current directory that contains most of the files of current interest to the
user
 When a file is to be accessed, the current directory is searched first. If the file is in another
directory then the user can specify the path or change the directory holding the required file as
the current directory.

Path name can be of two types: (2M-2011)


1) An absolute path begins at the root and follows a path down to the specified file, giving the
directory names on the path
Example: root/spell/mail/prt/first
2) A relative path defines a path from the current directory. Example: prt/first

FILE ALLOCATION METHODS: (5M-2012)(6M-2011)(5M-2010)


An allocation method refers to how disk blocks are allocated for files:
 Contiguous allocation
 Linked allocation
 Indexed allocation
1) CONTIGUOUS ALLOCATION: (2m-2012)
 Each file occupies a set of contiguous (adjoining) blocks on the disk.
 Disk addresses define a linear ordering on the disk. All the successive records of a file are adjacent to each other, which increases the speed of accessing records.
 Simple – only starting location (block #) and length (number of blocks) are required.
 Random access.

Advantages:
 Accessing a file that has been allocated contiguously is easy
 Both sequential and direct access can be supported by contiguous allocation
 Efficient usage of memory and CPU

Disadvantages:
x It is very difficult to find enough contiguous free blocks for a new file.
x If too little space is allocated to the file, the file cannot be extended.
x If more space is allocated than needed, the extra space is wasted (a problem for small files).

[Figure: Contiguous allocation of disk space]
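The (start, length) bookkeeping described above can be sketched in a few lines of Python. The start block 14 and length 5 below are made-up illustrative values, not taken from the text:

```python
# Contiguous allocation: a directory entry holds only a start block and a
# length, so mapping a logical block to its physical block is one addition.
def physical_block(start, length, logical):
    if not 0 <= logical < length:
        raise ValueError("logical block outside the file")
    return start + logical

# Hypothetical file occupying 5 contiguous blocks starting at block 14:
print(physical_block(14, 5, 0))  # 14 (first block of the file)
print(physical_block(14, 5, 4))  # 18 (last block of the file)
```

This addition is also why both sequential and direct access are cheap under contiguous allocation.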

2) LINKED ALLOCATION:
 Linked list solves all the problem of contiguous allocation
 With linked allocation, each file is a linked list of disk blocks
 The disk blocks may be scattered anywhere on the disk
 The directory contains a pointer to the first and last blocks of the file
 Each block contains a pointer to the next block
Advantages:
 There is no need to declare the size of a file when that file is created
 It never requires disk compaction
Disadvantages:
x It can be used effectively only for sequential-access files; it does not support direct access.
x Extra space is required to store the pointers.
x It is less reliable: if a pointer is lost or damaged, the rest of the file becomes inaccessible.
x Seek times can be longer, since the blocks may be scattered across the disk.

[Figure: Linked allocation of disk space]

3) Indexed Allocation:
 A modification of linked allocation where the disk block pointers for a file are all placed in an index
block
 File indexes are stored in separate block and the entry for the file in the file allocation table
points to that block
 Allocation may be on the basis of either fixed-size or variable-size blocks.
Advantages:
 Provides efficient random access
[Figure: Indexed allocation]
Free space management: (5M-2014)(5M-2010)
 Since disk space is limited, need to reuse the space from deleted files for new files, if possible.
 To keep track of free disk space, the system maintains a free-space list.
 The free-space list records all free disk blocks that are not being used.
 To create a file, the free-space list is searched for the required amount of space, and that space is allocated to the new file.
 This space is then removed from the free-space list.
 When a file is deleted, its disk space is added to the free-space list.

The methods to manage free disk blocks are:


 Bit vector
 Linked list
 Grouping
 Counting

1) Bit Vector
 Frequently, the free-space list is implemented as a bit map or bit vector.
 Each block is represented by 1 bit.
 If the block is free, the bit is 1;
 If the block is allocated, the bit is 0.
 For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, are free and the rest of the blocks are
allocated.
 The free-space bit map would be 0011110011000…
 The main advantage of this approach is its relative simplicity and its efficiency in finding the
first free block or consecutive free blocks on the disk.
 However, extra space is needed to store the bit map, and keeping it in main memory occupies a large amount of memory for large disks.
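A minimal Python sketch of the bit vector for the example above (a 13-block disk with blocks 2, 3, 4, 5, 8 and 9 free; 1 = free, 0 = allocated):

```python
# Build the free-space bit map and scan it for the first free block.
def build_bitmap(n_blocks, free_blocks):
    free = set(free_blocks)
    return [1 if b in free else 0 for b in range(n_blocks)]

def first_free(bitmap):
    # Index of the first free block, or -1 if no block is free.
    for i, bit in enumerate(bitmap):
        if bit == 1:
            return i
    return -1

bitmap = build_bitmap(13, [2, 3, 4, 5, 8, 9])
print("".join(map(str, bitmap)))  # 0011110011000
print(first_free(bitmap))         # 2
```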

2) Linked List:
 Another approach to free-space management is to link together all the free disk blocks, keeping
a pointer to the first free block in a special location on the disk and caching it in memory.
 This first block contains a pointer to the next free disk block, and so on.

3) Grouping:
 A modification of the free-list approach is to store the addresses of n free blocks in the first
free block.
 The first n-1 of these blocks are actually free. The last block contains the addresses of other n
free blocks, and so on.
 The addresses of a large number of free blocks can now be found quickly, unlike the situation
when the standard linked-list approach is used.

4) Counting:
 Rather than listing every free block, we keep the address of the first free block and the count of free contiguous blocks that follow it.
 Each entry in the free-space list then consists of a disk address and a count.
 Although each entry requires more space than would a simple disk address, the overall list will be
shorter, as long as the count is generally greater than 1.
METHODS OF DATA RECOVERY IN FILE SYSTEMS: (5M-2011)
 Files and directories are kept both in main memory and on disk
 Care must be taken to ensure that system failure does not result in loss of data or in data
inconsistency.
Important methods to recover files from system failures are:
 Consistency checking
 Backup and Restore

1) Consistency Checking:
 In case the system crashes before it writes all the modified blocks, the file system is left in an
inconsistent state.
 A special program is run at reboot time to check for and correct disk inconsistencies.
 The consistency checker (a system program, such as fsck in UNIX or chkdsk in MS-DOS) compares the data in the directory structure with the data blocks on disk and tries to fix any inconsistencies it finds.

2) BACKUP & RESTORE:


 Sometimes hard disks or magnetic disks fail and therefore it is important to backup the files
frequently.
 Utility programs can be used to back up data from disk to another storage device, such as a floppy disk, magnetic tape, optical disk, or another hard disk.
 The loss of an individual file or an entire disk can then be recovered by performing restoring of
data from back up.

Floppy Disks: (5m-2011)


 A floppy disk is a data storage device composed of a circular piece of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell.
 Floppy disks are read and written by a floppy disk drive.
 They are inexpensive magnetic disks.
 The structure of a floppy disk is the same as that of a hard disk.
 The storage capacity of a floppy disk is typically 1.44 MB.
Advantages of Floppy disk:
 Inexpensive
 Non volatile
 Portable
 Easy to use and maintain
Disadvantage of Floppy disk
x It can be physically damaged
x It can be corrupted easily
x Slow access rate
x Large amount of data cannot be stored.
DISK SCHEDULING ALGORITHMS: (7M-2015)(9M-2012)

 The performance of a computer system depends on how fast disk I/O requests are serviced.
 For each I/O request, first, a head is selected. Then the head is moved over the destination track.
After that, the disk is rotated to position the desired sector under the head. Finally, the I/O
operation is performed.
The Different disk scheduling algorithms are:
 First Come First Serve (FCFS) Scheduling
 Shortest seek Time First(SSTF) Scheduling
 SCAN Scheduling
 Circular SCAN(CSCAN) Scheduling
 LOOK Scheduling
 Circular LOOK(C-LOOK) Scheduling

1) FCFS Scheduling:- [First Come First Serve Scheduling] (6M-2011)(5M-2010)


 Simplest form of disk scheduling algorithm.
 It processes requests in the same order as they arrive.
 This technique is intrinsically fair, but it generally does not provide the fastest service.
 It involves a lot of random disk head movement, decreasing the disk throughput.
 This algorithm is used in small systems where I/O efficiency is not very important.
Example: Consider an example of a disk queue with requests for I/O to blocks on cylinders 98, 183,
37,122,14,124,65,67. If the disk head is initially at cylinder 53, calculate the head movement and
average head movement
Solution: The total head movement is
=(53 to 98) + (98 to 183) + (183 to 37) + (37 to 122) + (122 to 14) + (14 to 124) + (124 to 65) + (65 to
67)=45 +85+146+85+108+110+59+2= 640 tracks
The average head movement is
total number of head movements / number of requests = 640 /8 =80 tracks

[Figure: FCFS scheduling]
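The arithmetic in the worked example can be checked with a short Python sketch (an illustration only, not part of any operating system):

```python
# FCFS disk scheduling: service requests strictly in arrival order and
# add up the seek distance of each move.
def fcfs_head_movement(requests, head):
    total = 0
    for r in requests:
        total += abs(r - head)  # distance to the next request
        head = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
total = fcfs_head_movement(queue, 53)
print(total, total / len(queue))  # 640 tracks, average 80.0
```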
2) SSTF Scheduling : [Shortest-Seek-time First] (5M-2012)
 The shortest-seek-time-first selects the request with the minimum seek time from the current
head position. i.e all the requests close to the current head position is serviced first before moving
the head away to service other requests
 Since seek time increases with the number of cylinders traversed by the head, SSTF chooses the
pending requests closest to the current head position
Example: Consider an example of a disk queue with requests for I/O to blocks on cylinders 98, 183,
37,122,14,124,65,67. If the disk head is initially at cylinder 53, calculate the head movement , average
waiting time.
Solution: The total head movement is
=(53 to 65) + (65 to 67) + (67 to 37) + (37 to 14) + (14 to 98 )+ (98 to 122) + (122 to 124) + (124 to
183)=12 +2+30 +23+84+24+2 +59= 236 cylinders
The average head movement = 236/8 = 29.5 cylinders

[Figure: SSTF scheduling]
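A sketch of SSTF in Python on the same request queue; it reproduces the 236-cylinder total of the worked example:

```python
# SSTF disk scheduling: repeatedly service the pending request closest to
# the current head position.
def sstf_head_movement(requests, head):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

total = sstf_head_movement([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(total, total / 8)  # 236 cylinders, average 29.5
```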
3) SCAN SCHEDULING:
 In the SCAN algorithm, the disk arm starts at one end of the disk, and moves toward the other end,
servicing requests as it reaches each cylinder until it gets to the other end of the disk
 At the other end, the direction of head movement is reversed and servicing continues. The head
continuously scans back and forth across the disk
 The SCAN algorithm is also called elevator algorithm since the disk arm just like the elevator in a
building, first services all the requests going up, and then reversing to service the requests in the
other direction.
Example: Consider an example of a disk queue with requests for I/O to blocks on cylinders 98, 183,
37,122,14,124,65,67. If the disk head is initially at cylinder 53, and direction is towards 0, calculate the
head movement , average waiting time
Solution: From the current position 53, the head will service 37 and 14. At cylinder 0, the arm will
reverse and will move towards the other end of the disks, servicing the requests 65, 67, 98,122,124 and
183.
Total Head Movement = (53 to 37) + (37 to 14) + (14 to 0) + (0 to 65) + (65 to 67 )+ (67 to 98) + (98
to 122) + (122 to 124) + (124 to 183)=16 +23+14+65+2+31+24+2+59=236 cylinders
The average head movement = 236/8 = 29.5 cylinders.

[Figure: SCAN scheduling]
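SCAN can be sketched as two sorted sweeps. This version hard-codes the direction used in the worked example: toward cylinder 0 first, all the way to the disk edge, then back up:

```python
# SCAN (elevator) disk scheduling, head initially moving toward cylinder 0.
def scan_head_movement(requests, head):
    down = sorted([r for r in requests if r <= head], reverse=True)
    up = sorted(r for r in requests if r > head)
    total, pos = 0, head
    # Sweep down to the edge of the disk (cylinder 0), then sweep up.
    for r in down + [0] + up:
        total += abs(pos - r)
        pos = r
    return total

total = scan_head_movement([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(total, total / 8)  # 236 cylinders, average 29.5
```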
4) Circular SCAN(CSCAN) Scheduling

 Circular SCAN (C-SCAN) scheduling is a variant of SCAN designed to provide a more uniform wait time than SCAN.
 It overcomes the drawback of SCAN scheduling where some requests must wait for longer time
compared to others which are serviced immediately
 In C-SCAN the head moves from one end of the disk to the other servicing requests as it goes.
When the head reaches the other end, however it immediately returns to the beginning of the disk,
without servicing any requests on the return trip
 C-SCAN treats the cylinders as a circular list that wraps around from the last cylinder to the first
one.
Example: Consider an example of a disk queue with requests for I/O to blocks on cylinders 98, 183,
37,122,14,124,65,67. If the disk head is initially at cylinder 53, and direction is towards outside,
calculate the head movement, average waiting time
Solution: From the current position 53, the head will service 65, 67, 98, 122, 124 and 183, then continue to the last cylinder, 199. The head then returns to the beginning of the disk (cylinder 0) and services 14 and 37.
Total Head Movement = (53 to 65) + (65 to 67) + (67 to 98) + (98 to 122) + (122 to 124 )+ (124 to
183) + (183 to 199) + (199 to 0) + (0 to 14) + (14 to 37) =12
+2+31+24+2+59+16+199+14+23=382 cylinders
The average head movement = 382/8 = 47.75 cylinders.

[Figure: C-SCAN scheduling]
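A sketch of C-SCAN; as in the worked example, the last cylinder is assumed to be 199, and the jump from it back to cylinder 0 is counted as head movement:

```python
# C-SCAN disk scheduling: sweep up to the last cylinder, jump back to 0,
# then continue sweeping upward through the remaining requests.
def cscan_head_movement(requests, head, max_cyl=199):
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in up + [max_cyl]:
        total += abs(pos - r)
        pos = r
    total += max_cyl  # return trip 199 -> 0, counted as in the example
    pos = 0
    for r in low:
        total += abs(pos - r)
        pos = r
    return total

total = cscan_head_movement([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(total, total / 8)  # 382 cylinders, average 47.75
```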
5) LOOK SCHEDULING:
 Both SCAN and C-SCAN scheduling algorithm move the disk arm across the full width of the disk
 A variant of SCAN and C-SCAN is LOOK and C-LOOK
 In this method, the arm only goes as far as the last request in each direction. It then reverses its
direction immediately, without reaching the end of the disk.
 LOOK scheduling as in SCAN, requests are served when the head moves in both directions.
Example: Consider an example of a disk queue with requests for I/O to blocks on cylinders 98, 183,
37,122,14,124,65,67. If the disk head is initially at cylinder 53, and direction is towards outside,
calculate the head movement, average waiting time
Solution: From the current position 53, the head will service 65, 67, 98, 122, 124, 183. Then head
reverses its direction immediately, and will service 37 and 14.
Total Head Movement = (53 to 65) + (65 to 67) + (67 to 98) + (98 to 122) + (122 to 124 )+ (124 to
183) + (183 to 37) + (37 to 14)=12 +2+31+24+2+59+146+23=299 cylinders
The average head movement = 299/8 = 37.375 cylinders.
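LOOK differs from SCAN only in turning around at the last request instead of the disk edge. A sketch matching the worked example (head moving toward higher cylinders first):

```python
# LOOK disk scheduling: sweep up only as far as the largest request, then
# reverse and service the remaining (lower) requests.
def look_head_movement(requests, head):
    up = sorted(r for r in requests if r >= head)
    down = sorted([r for r in requests if r < head], reverse=True)
    total, pos = 0, head
    for r in up + down:
        total += abs(pos - r)
        pos = r
    return total

total = look_head_movement([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(total, total / 8)  # 299 cylinders, average 37.375
```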

6) C-LOOK SCHEDULING
 This is a variation of LOOK in which the head moves outward servicing requests until it reaches the last (outermost) request. It then jumps back to the innermost pending request, without servicing any requests on the return trip, and resumes servicing in the outward direction.
Example: Consider an example of a disk queue with requests for I/O to blocks on cylinders 98, 183,
37,122,14,124,65,67. If the disk head is initially at cylinder 53, and direction is towards outside,
calculate the head movement, average waiting time
Solution: From the current position 53, the head will service 65, 67, 98, 122, 124, 183. Then head
reverses its direction immediately, and will service 14 and 37.
Total Head Movement = (53 to 65) + (65 to 67) + (67 to 98) + (98 to 122) + (122 to 124 ) + (124 to
183) + (183 to 14) + (14 to 37) =12 +2+31+24+2+59+169+23=322 cylinders
The average head movement = 322/8 = 40.25 cylinders.
[Figure: C-LOOK scheduling]
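C-LOOK is the same upward sweep, but the return jump goes only to the innermost pending request; as in the worked example, that jump is itself counted as head movement:

```python
# C-LOOK disk scheduling: sweep up to the largest request, jump to the
# smallest pending request, then service the rest in increasing order.
def clook_head_movement(requests, head):
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in up + low:  # low is ascending: 14 then 37 in the example
        total += abs(pos - r)
        pos = r
    return total

total = clook_head_movement([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(total, total / 8)  # 322 cylinders, average 40.25
```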
Disk formatting (2M-2015)
 Disk formatting is the process of preparing a data storage device such as a hard disk drive,
solid-state drive, floppy disk or USB flash drive for initial use.
 Low-level formatting or Physical formatting involves dividing disk into sectors so that the disk
controller can read and write.
 Low-level formatting fills the disk with a special data structure for each sector that consists of
a header, a data area and a trailer. The header and trailer contain information used by the
disk controller, such as the sector number and an error-correcting code (ECC)

Virtual memory: (2m-2014)(2M-2010)


Virtual Memory is a feature of an operating system (OS) that allows a computer to compensate for
shortages of physical memory by temporarily transferring pages of data from random access memory
(RAM) to disk storage.

♥ Virtual Memory is a technique that allows the execution of processes that may not be completely in
memory. One major advantage of this scheme is that programs can be larger than physical memory.
Further, virtual memory abstracts main memory into an extremely large, uniform array of storage,
separating logical memory as viewed by the user from physical memory. This technique frees
programmers from the concerns of memory-storage limitations.
♥ Virtual memory is a memory management technique that allows the execution of processes, which
may not be completely in memory.
♥ The main advantage of this scheme is that user programs can be larger than the physical memory.
♥ The operating system keeps only those parts of the program in memory, which are required during
execution.
♥ The rest of it is kept on the disk.
♥ In fact Virtual memory allows execution of partially loaded processes, allowing user programs to be
larger than the physical memory.

SWAP-SPACE MANAGEMENT: (5M-2015) (6M-2012)(5M-2011)


♥ Swap space is a specially formatted area of hard-disk that the operating system can use while it is
managing the real memory, or machine RAM, of the computer.
♥ Swapping is the process of temporarily moving a process out of memory to the swap space (on disk) when free memory reaches a critically low point.
♥ Modern computers combine swapping with virtual memory techniques and swap pages
♥ Using swap space decreases system performance, since disk access is much slower than memory access.
♥ The main goal for the design and implementation of swap space is to provide the best throughput for
the virtual memory system (Throughput is a measure of how many units of information a system can
process in a given amount of time.)
♥ Swap space is used in various ways by different operating systems, depending on the memory-management algorithms in use. For instance:
o Some systems that implement swapping may use swap space to hold an entire process image, including the code and data.
o Paging systems simply store pages that have been pushed out of memory.
♥ The amount of swap space needed on a system depends on:
o The amount of physical memory
o The amount of virtual memory
o The way in which the virtual memory is used
♥ Examples of how some systems set aside swap space:
o Solaris sets swap space equal to the amount by which virtual memory exceeds pageable physical memory.
o Linux sets swap space to double the amount of physical memory.
o UNIX systems use a combination of swapping and paging, depending on the paging hardware.
♥ A swap-space can reside in one of the two places
o In file system
o In a separate disk partition(raw partition)

■■■■■■■■■■■■■■■■■■■■■■■■■■■■( UNIT 5)■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■


xxxxxxxxxxxxxxx(Protection and Security)xxxxxxxxxxxxxxxx

ENCRYPTION: (2M-2015)(2M-2011)

♥ Encryption is the process of converting data to unrecognizable or "encrypted” form.


♥ It is commonly used to protect sensitive information so that only authorized parties can view it
♥ This includes files and storage devices, as well as data transferred over wireless networks and the Internet.

Types of Viruses: (5m-2014)(5M-2015)(5M-2011)


 Boot Virus: This type of virus affects the boot sector of a floppy or hard disk. This is a crucial
part of a disk, in which information on the disk itself is stored together with a program that makes
it possible to boot (start) the computer from the disk.
 Macro Virus: Macro viruses infect files that are created using certain applications or programs
that contain macros.
 Directory Virus: Directory viruses change the paths that indicate the location of a file.
 Polymorphic Virus: Polymorphic viruses encrypt or encode themselves in a different way (using
different algorithms and encryption keys) every time they infect a system.
 File Infectors: This type of virus infects programs or executable files (files with an .EXE or .COM
extension). When one of these programs is run, directly or indirectly, the virus is activated,
producing the damaging effects it is programmed to carry out.
 Encrypted Viruses: This type of virus consists of encrypted malicious code plus a decryption module. The encryption makes the virus hard for antivirus software to detect; an antivirus program can usually detect it only when it decrypts itself in order to spread.
 Network Virus: Network viruses rapidly spread through a Local Area Network (LAN), and sometimes throughout the Internet. Generally, network viruses multiply through shared resources, i.e., shared drives and folders.
 Stealth Viruses: Stealth viruses try to trick antivirus software by intercepting its requests to the operating system. They can hide themselves from some antivirus programs, which therefore cannot detect them.
 Multipartite Viruses: Multipartite viruses are distributed through infected media and usually
hide in the memory. Gradually, the virus moves to the boot sector of the hard drive and infects
executable files on the hard drive and later across the computer system.
 Worms: A worm is technically not a virus, but a program very similar to one; it has the ability to self-replicate and can have negative effects on the system, but importantly it can be detected and eliminated by antivirus software.
THREAD: (2M-2011)(2M-2010)
♥ A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of
registers, (and a thread ID. )
♥ Threads are lightweight units of execution that form parts of a larger process.

Explain ageing: (2M-2012)


 A solution to the problem of indefinite blocking of low-priority processes is ageing.
 Ageing is a technique of gradually increasing the priority of processes that wait in the system for a long time.

Explain seek time?(2M-2014)(2M-2012)


Seek time is the time required for information to be located on a disk by a disk drive. The
lower this value is, the faster the drive will be able to find or read data.


Types of files: (2M-2015)(2M-2010)

File type        Usual extension       Function
Executable       exe, com, bin         Ready-to-run machine-language program
Source code      c, cc, java, pas      Source code in various languages
Text             txt, doc              Textual data, documents
Word processor   wp, tex, rtf, doc     Various word-processor formats
Library          lib, a, so, dll       Libraries of routines for programmers
Print or view    pdf, jpg, png         Binary file in a format for printing or viewing
Archive          zip, tar, rar         Related files grouped into one file, sometimes compressed, for archiving or storing
Multimedia       mp3, mov, avi, mp4    Binary file containing audio or A/V information

=================== (Tareq ^_^)=======================
