ITT 05101 - Operating System
Operating System Functions
• Concurrency: doing many things simultaneously (I/O,
processing, multiple programs, etc.). Several users work at the
same time as if each had a private machine.
• I/O devices: let the CPU work while a slow I/O device is
working
• Memory management: OS coordinates allocation of memory
and moving data between disk and main memory.
• Files: OS coordinates how disk space is used to store multiple
files
• Distributed systems & networks: allow a group of workstations
to work together on distributed hardware
Operating System Functions
• OS functions have evolved in response to the need for
efficient utilization of resources.
Memory Layout for a Simple Batch
System
[Figure: jobs job1, job2, job3, …, jobn laid out one after another, from the start of the batch to the end of the batch.]
Multiprogrammed Batch Systems
Several jobs are kept in main memory at the same time,
and the CPU is multiplexed among them.
OS Features Needed for
Multiprogramming
• Allocation of devices.
Time-Sharing Systems–Interactive
Computing
• The CPU is multiplexed among several jobs that are kept in memory
and on disk (the CPU is allocated to a job only if the job is in memory).
• On-line system must be available for users to access data and code.
Desktop Systems
Parallel Systems
• fail-soft systems
Parallel Systems (Cont.)
• Asymmetric multiprocessing
• Each processor is assigned a specific task; a master processor
schedules and allocates work to slave processors.
• More common in extremely large systems
Symmetric Multiprocessing
Architecture
Distributed Systems
• Distribute the computation among several physical processors.
• Loosely coupled system – each processor has its own local
memory; processors communicate with one another through
various communications lines, such as high-speed buses or
telephone lines.
• Advantages of distributed systems:
• Resource sharing
• Computation speedup – load sharing
• Reliability
• Communications
Distributed Systems (cont)
General Structure of Client-Server
Clustered Systems
• Clustering allows two or more systems to share storage.
Real-Time Systems
• Often used as a control device in a dedicated application such as
controlling scientific experiments, medical imaging systems,
industrial control systems, and some display systems.
Real-Time Systems (Cont.)
• Hard real-time:
• Secondary storage limited or absent; data stored in short-term
memory or read-only memory (ROM)
• Conflicts with time-sharing systems, not supported by
general-purpose operating systems.
• Example embedded systems include medical systems such as
heart pacemakers and industrial process controllers.
• Soft real-time:
• Limited utility in industrial control of robotics
• Useful in applications (multimedia, virtual reality) requiring
advanced operating-system features.
Handheld Systems
• Cellular telephones
• Issues:
• Limited memory
• Slow processors
Types of Operating
Systems
Operating systems have existed since the
very first computer generation, and
they keep evolving with time. In this
chapter, we will discuss some of the
important types of operating systems
which are most commonly used.
Batch operating system
• The users of a batch operating system do not interact with the computer directly. Each
user prepares his job on an off-line device like punch cards and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched
together and run as a group. The programmers leave their programs with the operator
and the operator then sorts the programs with similar requirements into batches.
• The CPU is often idle, because mechanical I/O devices are slower than the
CPU.
Time-sharing operating systems
• Time-sharing is a technique that enables many people, located at various terminals, to use a
particular computer system at the same time. Time-sharing or multitasking is a logical extension
of multiprogramming. Processor time shared among multiple users simultaneously is termed
time-sharing.
• The main difference between multiprogrammed batch systems and time-sharing systems is that
in multiprogrammed batch systems the objective is to maximize processor use, whereas in
time-sharing systems the objective is to minimize response time.
• Multiple jobs are executed by the CPU by switching between them, but the switches occur so
frequently that the user receives an immediate response. For example, in transaction
processing, the processor executes each user program in a short burst or quantum of
computation. That is, if n users are present, then each user gets a time quantum. When the
user submits a command, the response time is a few seconds at most.
Advantages AND Disadvantages
Disadvantages of time-sharing operating systems include −
• Problem of reliability
Distributed operating System
• Distributed systems use multiple central processors to serve multiple real-time applications and multiple
users. Data processing jobs are distributed among the processors accordingly.
• The processors communicate with one another through various communication lines (such as
high-speed buses or telephone lines). These are referred to as loosely coupled systems or
distributed systems. Processors in a distributed system may vary in size and function, and are
referred to as sites, nodes, computers, and so on.
• With the resource-sharing facility, a user at one site may be able to use the resources available at another.
• Users can speed up the exchange of data with one another via electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially continue operating.
Advantages AND
Disadvantages
The advantages of network operating systems are as follows −
• Upgrades to new technologies and hardware can be easily integrated into the system.
• Remote access to servers is possible from different locations and types of systems.
Real Time operating System
• A real-time system is defined as a data-processing system in which the time interval
required to process and respond to inputs is so small that it controls the environment. The
time taken by the system to respond to an input and display the required updated
information is termed the response time. So in this method, the response time is much
shorter than in online processing.
• Real-time systems are used when there are rigid time requirements on the operation of a
processor or the flow of data and real-time systems can be used as a control device in a
dedicated application. A real-time operating system must have well-defined, fixed time
constraints, otherwise the system will fail. For example, Scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic control
systems, etc.
Process
management
Definition of Concepts
• A process is a program in execution. It is the unit of work in a
modern time-sharing system. The execution of a process must
progress in a sequential fashion.
PROCESS STATE
• As a process executes, it changes state
according to environmental constraints.
PROCESS STATES
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur.
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
Process Control Block
• Each process is represented by a Process
Control Block (PCB).
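A PCB can be pictured as a C structure. The sketch below is illustrative only — a simplified subset of the fields a real kernel keeps, not the layout of any particular OS:

/* Illustrative PCB sketch: a simplified five-state model and a
 * handful of typical fields; names are hypothetical. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* process identifier            */
    proc_state_t   state;           /* current process state         */
    unsigned long  program_counter; /* address of next instruction   */
    unsigned long  registers[16];   /* saved CPU register contents   */
    int            priority;        /* CPU-scheduling information    */
    void          *page_table;      /* memory-management information */
    struct pcb    *next;            /* link for queueing (e.g., ready queue) */
} pcb_t;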
Process scheduling
• The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.
• As processes enter the system, they are put into a ready queue
to wait for execution. This queue is generally stored as a linked
list.
Process scheduling
• There are also other queues in the system. When a process is
allocated the CPU, it executes for a while and eventually quits, is
interrupted, or waits for the occurrence of a particular event, such as
the completion of an I/O request.
• Since there are many processes in the system, the disk may be busy
with the I/O request of some other process. The process therefore
may have to wait for the disk.
Process scheduling
• The CPU scheduler selects from among the processes that
are ready to execute, and allocates the CPU to one of them.
I/O-bound and CPU-bound
• The success of CPU scheduling depends on the following
observed property of processes: Process execution
consists of a cycle of CPU execution and I/O wait. There
is a large number of short CPU bursts and a small
number of long CPU bursts.
• An I/O-bound process is one that spends more of its time
doing I/O than it spends doing computations. It would
typically have many very short CPU bursts.
• A CPU-bound process, on the other hand, is one that
generates I/O requests infrequently, using more time
doing computation. It might have a few very long CPU
bursts.
I/O-bound and CPU-bound
• A good mix of I/O-bound and CPU-bound processes
increases overall efficiency, even if the scheduler
wastes some CPU time.
I/O-bound and CPU-bound
• If the CPU-bound process gets the CPU and holds it, all other processes that
finish their I/O will move into the ready queue to wait.
• While all the I/O-bound processes wait, the I/O devices are idle.
• Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O
device; all the I/O-bound processes, which have very short CPU bursts, execute
quickly and move back to the I/O queues.
Criteria for comparing scheduling
algorithms
• Waiting time: the CPU-scheduling algorithm does not affect the
amount of time during which a process executes or does
I/O; it affects only the amount of time spent waiting in the
ready queue. The shorter the better.
• Response time: in an interactive system, turnaround
time may not be the best criterion. Often, a process can
produce some output fairly early and continue
computing new results while previous results are being
output to the user. Thus, another measure is the time
from the submission of a request until the first response
is produced. The shorter the better.
Scheduling algorithms
First-come-first-serve (FCFS)
• First-come-first-serve (FCFS): the average waiting time is often quite long. Consider the following set
of processes that arrive at time 0, with CPU-burst lengths of P1: 24 ms, P2: 3 ms, P3: 3 ms. If the
processes arrive in the order P1, P2, P3, we get the result shown in the following Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

• The waiting times are P1: 0 ms, P2: 24 ms, P3: 27 ms, so the average waiting time is (0 + 24 + 27) / 3 = 17 ms.
• If the processes arrive in the order P2, P3, P1, the average waiting time is only 3 ms.
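The averages above can be checked mechanically. A minimal C sketch, assuming all processes arrive at time 0 as in this example:

/* FCFS waiting times: each process waits for the bursts of all
 * processes ahead of it. Burst values are from the example above. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};        /* P1, P2, P3 in arrival order */
    int n = sizeof burst / sizeof burst[0];
    int waiting = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, waiting);
        total   += waiting;
        waiting += burst[i];         /* the next process waits this much longer */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

Reordering the burst array to {3, 3, 24} reproduces the 3 ms average for the order P2, P3, P1.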
Scheduling algorithms
First-come-first-serve (FCFS)
• As an example, consider the following four processes and the Gantt chart, with
arrival times and CPU-burst lengths given in ms:

Process   Arrival time   Burst time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

| P1 | P2 | P3 | P4 |
0    8    12   21   26
Scheduling algorithms
Shortest-Job-First (SJF)
• As an example, consider the same four processes (arrival times 0, 1, 2, 3;
burst times 8, 4, 9, 5 ms) under preemptive SJF (shortest-remaining-time-first):

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
Scheduling algorithms: Priority
scheduling
Scheduling algorithms- Priority
scheduling
• Priorities can be defined either internally (calculated by the CPU
scheduler based on time limits, memory requirements, ratio of
I/O burst to CPU burst, etc.) or externally (set by criteria outside
the operating system when the process is created, such as the
importance of the process).
Scheduling algorithms- Round-Robin
Scheduling (RR)
• Round-Robin Scheduling (RR): it is designed especially for time-sharing
systems. It is similar to FCFS, but preemption is added to switch between
processes.
Scheduling algorithms
Round-Robin Scheduling (RR)
• As an example, consider the following four processes and the Gantt chart, with
arrival times and CPU-burst lengths given in ms:

Process   Arrival time   Burst time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Round Robin, quantum = 4, no priority-based preemption:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26
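A minimal C sketch of this schedule, assuming the process table above and a 4 ms quantum. It scans the processes in index order on each pass, which happens to match the ready-queue order of this particular example:

/* Simplified round-robin simulation for the example above.
 * Prints each process's completion and waiting time. */
#include <stdio.h>

#define N 4
#define QUANTUM 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {8, 4, 9, 5};
    int left[N], t = 0, done = 0;
    for (int i = 0; i < N; i++) left[i] = burst[i];

    while (done < N) {
        int ran = 0;
        for (int i = 0; i < N; i++) {
            if (left[i] == 0 || arrival[i] > t) continue;
            int slice = left[i] < QUANTUM ? left[i] : QUANTUM;
            t += slice;                       /* run for one quantum or less */
            left[i] -= slice;
            ran = 1;
            if (left[i] == 0) {
                done++;
                printf("P%d: completes at %2d ms, waited %2d ms\n",
                       i + 1, t, t - arrival[i] - burst[i]);
            }
        }
        if (!ran) t++;                        /* idle until the next arrival */
    }
    return 0;
}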
Scheduling algorithms: Multilevel
Feedback Queue Scheduling
• Multilevel Feedback Queue Scheduling: this kind of scheduling
allows a process to move between separate queues.
Scheduling algorithms
Multilevel Feedback Queue Scheduling
Scheduling algorithms - Remark
Process communication
• The concurrent processes executing in the operating system
may be either independent processes or cooperating
processes.
• Any process that does not share any data with any other
process is independent.
• On the other hand, a process is cooperating if it can affect or
be affected by the other processes executing in the system.
• Reasons for providing an environment that allows process
cooperation:
• Information sharing: since several users may be interested
in the same piece of information in a shared-memory
environment.
• Computation speedup: if we break a task into subtasks, each
of which executes in parallel with the others, the task can
run faster.
Process communication
• Besides the shared-memory environment, another way for
cooperating processes to communicate with each other is the
interprocess-communication (IPC) facility.
• IPC provides a message-passing system to allow processes to
communicate and to synchronize their actions. An IPC facility
provides at least two operations:
• send(Q, message) and receive(P, message).
• Messages sent by a process can be of either fixed or variable
size. If processes P and Q want to communicate, they must send
messages to and receive messages from each other;
• a communication link must exist between them. The link has
several physical implementations, such as shared memory, a
hardware bus, telephone lines and modems, or network cable.
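As one concrete illustration of such a link, a sketch using a POSIX pipe between a parent and a child process; here write() plays the role of send() and read() the role of receive():

/* Parent sends a message to its child over a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) return 1;    /* create the communication link */

    if (fork() == 0) {               /* child: the receiving process  */
        close(fd[1]);                /* close the unused write end    */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive    */
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        close(fd[0]);
    } else {                         /* parent: the sending process   */
        close(fd[0]);                /* close the unused read end     */
        write(fd[1], "hello", strlen("hello"));         /* send       */
        close(fd[1]);
    }
    return 0;
}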
Deadlock
• In a multiprogramming environment, several processes may
compete for a finite number of resources (e.g., files, printers,
memory blocks).
Deadlock
• For example,
• Process P1 requests filea and fileb, while process P2 also requests filea and
fileb;
• P1 holds filea and repeatedly requests fileb from the OS, while P2 holds fileb
and repeatedly requests filea from the OS;
• Both P1 and P2 wait forever: a deadlock exists.
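A minimal sketch of this scenario using two pthread mutexes standing in for filea and fileb; when run, each thread blocks forever on its second lock:

/* Two threads acquire the same two locks in opposite order: deadlock. */
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t filea = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t fileb = PTHREAD_MUTEX_INITIALIZER;

void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&filea);      /* P1 holds filea             */
    sleep(1);                        /* give P2 time to take fileb */
    pthread_mutex_lock(&fileb);      /* blocks forever: deadlock   */
    pthread_mutex_unlock(&fileb);
    pthread_mutex_unlock(&filea);
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&fileb);      /* P2 holds fileb             */
    sleep(1);
    pthread_mutex_lock(&filea);      /* blocks forever: deadlock   */
    pthread_mutex_unlock(&filea);
    pthread_mutex_unlock(&fileb);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);          /* never returns */
    pthread_join(t2, NULL);
    return 0;
}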
Memory
Management
Management of Real Memory
• Logical vs. Physical Address Space

[Figure: dynamic relocation — the relocation register (12000) is added to the CPU's logical address to form the physical address (e.g., 13352) sent to the memory unit.]
Management of Real Memory
• Memory Management Unit – Hardware device that maps virtual
address to physical address.
[Figure: the storage hierarchy — CPU registers, cache, main memory, secondary (swap) storage. Moving down the hierarchy, access time increases, capacity increases, and cost decreases.]
Management of Real Memory
• Any OS that supports more than one user at a time must provide a
mechanism for dividing central memory among the concurrent
processes.
• Level of multiprogramming is limited only by the number of jobs that
can fit into central memory.
• Many multiprogramming and multiprocessing systems divide
memory into partitions, with each process being assigned to a
different partition.
• Fixed partitions: predefined in size and position
• Variable partitions: allocated dynamically according to the
requirements of the jobs being executed
Fixed Partition Example
Strategy of Fixed Partition
• Allocation scheme:
• load each incoming job into the smallest free partition in which
it will fit.
• Initial selection of the partition sizes:
• Considerations:
• There must be enough large partitions so that large jobs
can be run without too much delay.
• If there are too many large partitions, a great deal of
memory may be wasted when small jobs are running.
• Scheme:
• Tailor a set of partitions to the expected population of job
sizes
Variable Partition Example
Strategy of Variable Partition
• Allocation scheme:
• For each job to be loaded, OS allocates, from the free
memory areas, a new partition of exactly the size required.
• OS must keep track of allocated and free areas of
memory, usually by maintaining a linked list of free
memory areas.
• This list is scanned when a new partition is to be
allocated.
• First-fit allocation: the first free area in which it will fit
• Best-fit allocation: the smallest free area in which it will
fit (see the sketch below)
• When a partition is released, its assigned memory is returned to
the free list and combined with adjacent free areas.
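A minimal first-fit sketch over such a free list; the structure and names are illustrative, and a real allocator would also coalesce adjacent free areas on release. Best-fit would instead scan the entire list for the smallest adequate area:

/* First-fit allocation from a linked list of free areas. */
#include <stddef.h>

struct free_area {
    size_t start;                  /* first address of the free area */
    size_t size;                   /* length of the free area        */
    struct free_area *next;        /* next entry in the free list    */
};

/* Return the start address of the allocated partition,
 * or (size_t)-1 if no free area is large enough. */
size_t first_fit(struct free_area *list, size_t request) {
    for (struct free_area *a = list; a != NULL; a = a->next) {
        if (a->size >= request) {
            size_t start = a->start;
            a->start += request;   /* carve the partition off the front */
            a->size  -= request;
            return start;
        }
    }
    return (size_t)-1;
}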
Memory Protection
• Memory protection:
• When a job is running in one partition, it must be prevented
from reading and writing memory locations in any other
partition or in the OS.
• Approaches (hardware support is necessary)
• Using a pair of bounds registers that contain the beginning and
ending addresses of a job’s partition
• OS sets the bounds registers (in supervisor mode) when a
partition is assigned to a new job.
• The values in these registers are automatically saved and
restored during context switching.
• For every memory reference, the hardware automatically
checks the referenced address against the bounds registers
(see the sketch below).
• Using storage protection key
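The bounds check itself is simple. A sketch of what the hardware does on every reference under the bounds-register approach (in reality this is wired logic that raises a trap, not software):

/* Illustrative bounds check; the registers are set by the OS
 * in supervisor mode when the partition is assigned. */
#include <stdbool.h>

unsigned long lower_bound;   /* start address of the job's partition */
unsigned long upper_bound;   /* end address of the job's partition   */

bool reference_ok(unsigned long addr) {
    /* An out-of-range reference would trap to the OS. */
    return addr >= lower_bound && addr <= upper_bound;
}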
Memory Fragmentation
• Memory fragmentation occurs when the available free memory
is split into several separate blocks, with each block being too
small to be of use.
• External fragmentation – total memory space exists to
satisfy a request, but it is not contiguous.
• Internal fragmentation – allocated memory may be
slightly larger than requested memory; this size difference is
memory internal to a partition, but not being used.
Memory Fragmentation
• One possible solution: relocatable partitions
• Cons: moving partitions (compaction) consumes processor time.
Relocatable Partition Example
Relocatable Partitions
• Practical implementation of relocatable partitions requires some hardware
support: a special relocation register containing the beginning address of the
program currently being executed.
• The value of this register is modified when the process is moved to a new
location.
• This register is automatically saved and restored during context switching.
• The value of this register is automatically added to the address for every
memory reference made by the user program.
Basic Concept of Virtual
Memory
• A virtual resource is one that appears to a user program to
have characteristics that are different from those of the actual
implementation of the resource.
• User programs are allowed to use a large contiguous virtual
memory, or virtual address space.
• Virtual memory
• is stored on some external device, the backing store.
• may even be larger than the total amount of real memory
available on the computer.
• can increase the level of multiprogramming because only
portions of virtual memory are mapped into real memory as
they are needed.
Virtual Memory
• Division of storage organization:
  • Real storage
    • Single-user dedicated systems
    • Real storage multiprogramming systems
      • Fixed-partition multiprogramming (absolute, relocatable)
      • Variable-partition multiprogramming
  • Virtual storage
    • Virtual storage multiprogramming systems
      • Pure paging
      • Pure segmentation
      • Combined paging/segmentation
Virtual Memory
• Characteristics of programs: programs have code used only in
unusual situations, e.g., error management.
• Arrays, lists, and tables are allocated more memory than needed.
Demand Paging
• Demand paging
• One common method for implementing virtual memory.
• Demand Paging refers to a technique where program pages are
loaded from disk into memory as they are referenced.
• Virtual memory of a process is divided into pages of some fixed
length.
• Real memory of the computer is divided into page frames of the
same length as the pages.
• Mapping of pages onto page frames is described by a page map
table (PMT).
• PMT is used by the hardware to convert addresses in a
program’s virtual memory into the corresponding addresses in
real memory.
• This address conversion is known as dynamic address
translation.
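A minimal sketch of dynamic address translation, assuming a 4 KB page size and an illustrative PMT:

/* Split a virtual address into page number and offset, then map
 * the page to its frame via the page map table (PMT). */
#include <stdio.h>

#define PAGE_SIZE 4096u

unsigned pmt[] = {5, 9, 7, 3};   /* pmt[page] = page frame (illustrative) */

unsigned translate(unsigned virtual_addr) {
    unsigned page   = virtual_addr / PAGE_SIZE;  /* page number    */
    unsigned offset = virtual_addr % PAGE_SIZE;  /* offset in page */
    return pmt[page] * PAGE_SIZE + offset;       /* real address   */
}

int main(void) {
    /* Virtual address 8200 = page 2, offset 8; frame 7 gives 28680. */
    printf("virtual 8200 -> real %u\n", translate(8200));
    return 0;
}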
Demand Paging
Virtual vs. Physical memory
Pure Paging
• Pager brings in individual pages into memory
Pure Paging (cont.)
• If there is ever a reference to a page not in memory, the first
reference traps to the OS: a page fault.
• OS looks at another table to decide:
Pure Paging (cont.)
• What happens if there is no free frame?
• Need a page-replacement algorithm (how should the victim page be chosen?)
Page Selection for Removal
• Strategies:
• Least recently used (LRU) method
• Keep records of when each page in memory was last referenced,
and replace the page that has been unused for the longest time.
• The overhead for this kind of record keeping can be high, so simpler
approximations to LRU are often used (see the sketch below).
• Working set
• Determine the set of pages that are frequently used by the
process in question.
• The system attempts to replace pages in such a way that each
process always has its working set in memory.
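A minimal LRU sketch over an illustrative reference string with three frames; each frame records when its page was last touched, and the stalest frame is the victim:

/* Count page faults under LRU replacement. */
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int page[FRAMES], used[FRAMES];
    int faults = 0;
    for (int i = 0; i < FRAMES; i++) page[i] = -1;   /* empty frames */

    for (int t = 0; t < (int)(sizeof refs / sizeof refs[0]); t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (page[i] == refs[t]) hit = i;
        if (hit >= 0) { used[hit] = t; continue; }   /* update last use */

        faults++;
        int victim = 0;                              /* empty or LRU frame */
        for (int i = 1; i < FRAMES; i++)
            if (page[i] == -1 ||
                (page[victim] != -1 && used[i] < used[victim]))
                victim = i;
        page[victim] = refs[t];
        used[victim] = t;
    }
    printf("page faults: %d\n", faults);
    return 0;
}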
Implementation of Page Tables
• Through dedicated register.
• A good strategy for small page tables (< 256 entries), but not
satisfactory for large sizes, e.g., a million entries.
• Page table is kept in main memory.
• Through page table registers:
• Page-table base register (PTBR) points to the page table in
memory.
• Page-table length register (PTLR) indicates size of the page
table- checked against every logical address to validate the
address- failure results in trap to the OS.
• Through associative registers:
• Parallel search
Demand-Paging Systems
• Advantages:
• Efficient memory utilization
• Avoid most of the wasted memory due to fragmentation associated with
partitioning schemes.
• Parts of a program that are not used during a particular execution need
not be loaded.
• Disadvantages:
• Vulnerable to thrashing problem:
• The computing system spends most of its time swapping pages but not
doing useful work.
• Consider a case (times in microseconds):
• Memory reference: 1 μs
• Fetch a page from the backing store: 10,000 μs
• Page fault rate: 1%
• Effective access time ≈ 0.99 × 1 μs + 0.01 × 10,000 μs ≈ 101 μs,
so only about 1% of the time goes to useful work.
Locality of Reference
• To avoid thrashing, page fault rate has to be much lower.
File
management
File-System Interface
File Concept
• A file is a named collection of related information that is
recorded on secondary storage.
File Concept
• Many different types of information may be stored in a file – source
programs, object programs, executable programs, numeric data, text,
payroll records, graphic images, sound recordings, and so on.
• A text file is a sequence of characters organized into lines. A
source file is a sequence of subroutines and functions, each of
which is further organized as declarations followed by executable
statements.
• An object file is a sequence of bytes organized into blocks
understandable by the system’s linker.
• An executable file is a series of code sections that the loader can
bring into memory and execute.
File Attributes
• A file has certain attributes, which vary from one operating system to another,
but typically consist of these:
• Name – the only information kept in human-readable form.
• Time, date, and user identification – data for protection, security, and
usage monitoring.
• Information about all files is kept in the directory structure, which is
maintained on the disk.
File Operations
• A file is an Abstract Data Type. To define a file properly, we
need to consider the operations that can be performed on files.
• Create
• Write
• Read
• Reposition within File (or File Seek)
• Delete
• Truncate
• Open(Fi) – search the directory structure on disk for
entry Fi, and move the content of the entry to memory.
• Close(Fi) – move the content of entry Fi in memory to the
directory structure on disk.
Access Methods
SEQUENTIAL ACCESS
• Writes allocate space for the record and move the pointer to the
new End Of File.
DIRECT ACCESS
• Method useful for disks.
• disk location.
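A minimal C sketch of direct access with fixed-length records; the file name and record size are illustrative:

/* Jump straight to record n without reading the records before it. */
#include <stdio.h>

#define RECORD_SIZE 64L

int main(void) {
    char record[RECORD_SIZE];
    FILE *f = fopen("data.db", "rb");     /* hypothetical record file */
    if (f == NULL) return 1;

    long n = 5;                           /* read the 6th record      */
    fseek(f, n * RECORD_SIZE, SEEK_SET);  /* reposition by block      */
    fread(record, 1, RECORD_SIZE, f);

    fclose(f);
    return 0;
}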
OTHER ACCESS METHODS
• Other access methods can be built on top of the direct-access
method; these generally involve constructing an index for the file.
• To find a record in the file, we first search the index, and then use
the pointer to access the file directly and to find the desired
record.
OTHER ACCESS METHODS
Example 1:
Protection
• Types of Access
Protection
• Mode of access: read (R), write (W), execute (X)
• Ask the administrator to create a group (unique name), say G,
and add some users to the group.
• For a particular file (say game) or subdirectory, define
an appropriate access.
• Three classes of users:

                 mode   R W X
Owner access     7   =  1 1 1
Group access     6   =  1 1 0
Public access    1   =  0 0 1
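These class/permission bits are what the POSIX chmod interface expresses in octal. A minimal sketch applying the 7-6-1 example above to the file game (equivalent to the shell command chmod 761 game):

/* Owner rwx (7), group rw- (6), public --x (1). */
#include <sys/stat.h>

int main(void) {
    return chmod("game", 0761) == 0 ? 0 : 1;
}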
File-System Implementation
• To provide efficient and convenient access to the disk, the
operating system imposes one or more file systems to allow the
data to be stored, located, and retrieved easily.
• The first problem is defining how the file system should look
to the user. This task involves defining a file and its
attributes, the operations allowed on a file, and the directory
structure for organizing files.
• The second problem is creating algorithms and data
structures to map the logical file system onto the physical
secondary-storage devices.
File-System Implementation
• The file system itself is generally composed of many different levels.

[Figure: layered file system — application programs at the top, then the logical file system, down through intermediate layers to the devices.]
File-System Implementation
• The lowest level, the I/O control, consists of device drivers
and interrupt handlers to transfer information between the
main memory and the disk system.
File-System Implementation (Cont.)
File-System Implementation (Cont.)
Allocation Methods
• The direct-access nature of disks allows us flexibility in the
implementation of files. In almost every case, many files will be
stored on the same disk.
CONTIGUOUS ALLOCATION
• The contiguous-allocation method
requires each file to occupy a set of
contiguous blocks on the disk.
LINKED ALLOCATION
• At file creation time, simply tell the directory about the file.
When writing, get a free block and write to it, enqueueing it
to the file header.
LINKED ALLOCATION
• Pointers use up space in each block.

INDEXED ALLOCATION
• Method suffers from wasted space since, for small files, most of
the index block is wasted. What is the optimum size of an index
block?