Unit 3 Os
1.1.1 Process:
In the OS, each process is represented by its PCB (Process Control Block). The PCB generally
contains the following information:
• Process State: The state may be new, ready, running, waiting, halted, and so on.
• Process ID : each process has a unique ID.
• Program Counter (PC) value: The counter indicates the address of the next instruction to be
executed for this process.
• Register values: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information. Along with the program counter, this state
information must be saved when an interrupt occurs, to allow the process
to be continued correctly afterward.
• Memory Management Information (page tables, base/bound registers, etc.)
• Processor Scheduling Information (priority, last processor burst time, etc.)
• I/O Status Info (outstanding I/O requests, I/O devices held, etc.)
• List of Open Files
• Accounting Info: This information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers, and so on.
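The fields above can be pictured as a simple record. The sketch below is a hypothetical Python dataclass for illustration only, not any real kernel's layout (Linux, for instance, keeps this information in a large C struct called task_struct):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical PCB sketch: field names mirror the list above.
@dataclass
class PCB:
    pid: int                                      # unique process ID
    state: str = "new"                            # new, ready, running, waiting, halted
    program_counter: int = 0                      # address of next instruction
    registers: dict = field(default_factory=dict) # saved register values
    priority: int = 0                             # scheduling information
    open_files: List[str] = field(default_factory=list)
    cpu_time_used: float = 0.0                    # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"       # process admitted to the ready queue
print(pcb.pid, pcb.state)
```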
A. Process Creation
• A parent process creates child processes, which in turn create other processes, forming a
tree of processes.
• Resource sharing: the parent and children may share all, some, or none of the parent's resources.
• Execution: the parent and children may execute concurrently, or the parent may wait until its children terminate.
B. Process Termination
• A process terminates when it executes its last statement and asks the operating system to delete it (exit).
• A parent may also terminate its children, for example if the parent itself is exiting (cascading termination).
• The process address space consists of the linear address range presented to each process.
Each process is given a flat 32- or 64-bit address space, with the size depending on the
architecture. The term "flat" describes the fact that the address space exists in a single
range. (As an example, a 32-bit address space extends from address 0 to 4294967295.)
• Some operating systems provide a segmented address space, with addresses existing not
in a single linear range, but instead in multiple segments. Modern virtual memory
operating systems generally have a flat memory model and not a segmented one.
• A memory address is a given value within the address space, such as 4021f000. The
process can access a memory address only in a valid memory area. Memory areas have
associated permissions, such as readable, writable, and executable, that the associated
process must respect. If a process accesses a memory address not in a valid memory area,
or if it accesses a valid area in an invalid manner, the kernel kills the process with the
dreaded "Segmentation Fault" message.
The address space typically contains the following memory areas:
o A memory map of the executable file's code, called the text section.
o A memory map of the executable file's initialized global variables, called the data
section.
o A memory map of the zero page (a page consisting of all zeros, used for purposes
such as this) containing uninitialized global variables, called the bss section
o A memory map of the zero page used for the process's user-space stack (do not
confuse this with the process's kernel stack, which is separate and maintained and
used by the kernel)
o An additional text, data, and bss section for each shared library, such as the C
library and dynamic linker, loaded into the process's address space.
o Any memory mapped files
o Any shared memory segments
o Any anonymous memory mappings, such as those associated with malloc().
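On Linux, these memory areas and their permissions can be inspected in /proc/&lt;pid&gt;/maps. The sketch below parses one line in that style; the addresses and path shown are made up for illustration:

```python
# A sample line in the style of Linux /proc/<pid>/maps (values are made up):
sample = "08048000-0804c000 r-xp 00000000 08:01 123456 /bin/example"

def parse_map_line(line):
    """Split one maps-style line into (start, end, permissions, path)."""
    addr, perms, _offset, _dev, _inode, *path = line.split()
    start, end = (int(x, 16) for x in addr.split("-"))
    return start, end, perms, path[0] if path else "[anonymous]"

start, end, perms, path = parse_map_line(sample)
# r-x permissions mark a readable, executable, non-writable area: a text section.
print(hex(start), hex(end), perms, path)
```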
1.2 Threads:
Threads may be provided either at the user level, for user threads, or by the kernel, for
kernel threads. User threads are supported above the kernel and are managed without kernel
support, whereas kernel threads are supported and managed directly by the operating system.
There must exist a relationship between user threads and kernel threads. There are three common
ways of establishing this relationship.
A. Many-to-One Model: many user-level threads are mapped to a single kernel thread.
B. One-to-One Model: each user thread is mapped to its own kernel thread, which allows for
greater concurrency than the many-to-one model.
C. Many-to-Many Model: multiplexes many user-level threads to a smaller or equal number of
kernel threads.
The main objective of CPU scheduling is to maximize CPU utilization. The operating system
uses process scheduling to decide which ready process runs next, from the time a process is
created until it terminates execution.
1.3.2 Scheduling Queue: A scheduling queue is generally stored as a linked list. A ready-queue header
contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that
points to the next PCB in the ready queue.
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to
execute
1.3.3 Schedulers: A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes from these queues
in some fashion. The selection process is carried out by the appropriate scheduler. Schedulers
have two types:
1. Long Term Scheduler:
A. Selects which processes should be brought into the ready queue.
B. Invoked very infrequently (seconds, minutes).
C. Controls the degree of multiprogramming (the number of processes in memory).
2. Short Term Scheduler:
A. Selects which process should be executed next and allocates the CPU to it.
B. Invoked very frequently (milliseconds).
Medium-Term Scheduler:
Some operating systems, such as time-sharing systems, may introduce an additional,
intermediate level of scheduling. The idea behind the medium-term scheduler is that it can be
advantageous to remove processes from memory (and from active contention for the CPU) and
thus reduce the degree of multiprogramming. Later, the process can be reintroduced into
memory, and its execution can be continued where it left off. This scheme is called swapping.
1.3.4 Dispatcher:
Dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves:
• Switching Context
o When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process.
o Context-switch time is overhead; the system does no useful work while switching.
o Time dependent on hardware support.
• Switching to user mode
• Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, given that it is invoked during every process switch.
The time it takes for the dispatcher to stop one process and start another running is known as
dispatch latency.
Preemptive scheduling: Preemptive scheduling is priority-driven: the currently running process
can be interrupted, so the highest-priority ready process is always the one executing on the CPU.
Non-Preemptive scheduling: once a process enters the running state, it is not removed from the
CPU until it finishes its service time.
• CPU Utilization: We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Processor Utilization = (Processor Busy Time / (Processor Busy Time + Processor Idle time))*100
• Throughput: the number of processes that are completed per time unit, called
throughput.
• Waiting Time: the amount of time that a process spends waiting in the ready queue.
Waiting Time = Turnaround Time – Processing Time
• Response Time: time from the submission of a request until the first response is
produced. This measure, called response time.
Response Time = T(First Response) – T(submission of request)
➢ Optimization Criteria:
✓ Max CPU utilization
✓ Max throughput
✓ Min turnaround time
✓ Min waiting time
✓ Min response time
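The formulas above (utilization, waiting time, response time) translate directly into code. A minimal sketch; the function names are illustrative, not from any library:

```python
def utilization(busy, idle):
    """Processor Utilization = busy / (busy + idle) * 100."""
    return busy / (busy + idle) * 100

def waiting_time(turnaround, processing):
    """Waiting Time = Turnaround Time - Processing Time."""
    return turnaround - processing

def response_time(first_response, submission):
    """Response Time = T(first response) - T(submission)."""
    return first_response - submission

print(utilization(24, 0))    # 100.0  (CPU never idle)
print(waiting_time(9, 6))    # 3
print(response_time(5, 2))   # 3
```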
B. Shortest-Job-First (SJF) Scheduling:
• Associate with each process the length of its next CPU burst. Use these lengths to
schedule the process with the shortest time.
• Two schemes:
i. nonpreemptive – once CPU given to the process it cannot be preempted until
completes its CPU burst
ii. preemptive – if a new process arrives with CPU burst length less than remaining
time of current executing process, preempt. This scheme is known as the
Shortest-Remaining-Time-First (SRTF)
• SJF is optimal – gives minimum average waiting time for a given set of processes
Example:
Processes P1, P2, P3, and P4 have burst times of 6, 8, 7, and 3 microseconds respectively. Draw
the Gantt chart and calculate the average turnaround time, average waiting time, CPU utilization,
and throughput using SJF.
Process   Burst Time   T.A.T. = T(completion) - T(submission)   W.T. = T.A.T. - Burst Time
P4        3            3 - 0 = 3                                3 - 3 = 0
P1        6            9 - 0 = 9                                9 - 6 = 3
P3        7            16 - 0 = 16                              16 - 7 = 9
P2        8            24 - 0 = 24                              24 - 8 = 16
GANTT CHART
P4 P1 P3 P2
0 3 9 16 24
Average T.A.T. = (3+9+16+24)/4 = 52/4 = 13 microseconds
Average W.T. = (0+3+9+16)/4 = 28/4 = 7 microseconds
CPU Utilization = (24/24)*100 = 100%
Throughput = 4/24 ≈ 0.167 processes per microsecond
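The SJF computation above can be reproduced with a short simulation. A sketch assuming, as in the example, that all processes arrive at time 0:

```python
def sjf(bursts):
    """Non-preemptive SJF for processes all arriving at time 0.
    bursts: dict of name -> burst time.
    Returns (execution order, turnaround times, waiting times)."""
    order = sorted(bursts, key=lambda p: bursts[p])   # shortest burst first
    time, tat, wt = 0, {}, {}
    for p in order:
        wt[p] = time            # waited from arrival (0) until it starts
        time += bursts[p]
        tat[p] = time           # turnaround = completion - arrival(0)
    return order, tat, wt

order, tat, wt = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(order)                                          # ['P4', 'P1', 'P3', 'P2']
print(sum(tat.values()) / 4, sum(wt.values()) / 4)    # 13.0 7.0
```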
C. Priority Scheduling:
• The CPU is allocated to the process with the highest priority (smallest integer highest
priority)
• Problem Starvation – low priority processes may never execute
• Solution Aging – as time progresses increase the priority of the process
GANTT CHART:
P2 P5 P1 P3 P4
0 1 6 16 18 19
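The notes do not list the burst times or priorities for this example; the values below are assumptions chosen only to be consistent with the Gantt chart above (burst times 10, 1, 2, 1, 5 for P1..P5, with priority 1 the highest):

```python
# Assumed data (not given in the notes): name -> (burst time, priority).
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}

def priority_schedule(procs):
    """Non-preemptive priority scheduling, smallest integer = highest priority.
    Returns (execution order, completion time per process)."""
    order = sorted(procs, key=lambda p: procs[p][1])
    time, finish = 0, {}
    for p in order:
        time += procs[p][0]
        finish[p] = time
    return order, finish

order, finish = priority_schedule(procs)
print(order)    # ['P2', 'P5', 'P1', 'P3', 'P4'] -- matches the Gantt chart
print(finish)   # completion times 1, 6, 16, 18, 19
```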
D. Round-Robin Scheduling:
• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready
queue.
• If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits
more than (n-1)q time units.
• Used for time sharing & multiuser O.S.
Example:
Processes P1, P2, and P3 have processing times of 24, 3, and 3 milliseconds respectively. Draw
the Gantt chart and calculate the average turnaround time, average waiting time, CPU utilization,
and throughput using round robin with a time slice of 4 milliseconds.
GANTT CHART
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
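The round-robin schedule above can be reproduced with a small simulation. A sketch assuming all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin for processes all arriving at time 0.
    bursts: dict of name -> burst time.
    Returns completion time per process."""
    remaining = dict(bursts)
    queue = deque(bursts)            # FIFO ready queue
    time, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = time
        else:
            queue.append(p)          # preempted: back to the tail of the queue
    return finish

finish = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(finish)   # {'P2': 7, 'P3': 10, 'P1': 30} -- matches the Gantt chart
```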
5. Example: in an MFQ scheduler with 3 queues, the scheduler first executes all the
processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1.
Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
6. Three queues:
Q0 – time quantum 8 milliseconds(RR)
Q1 – time quantum 16 milliseconds(RR)
Q2 – FCFS
7. Scheduling
1. A new job enters queue Q0. When it gains CPU, job receives 8 milliseconds. If it
does not finish in 8 milliseconds, job is moved to tail of queue Q1.
2. If queue Q0 is empty, the process at the head of Q1 is given a quantum of 16
additional milliseconds. If it still does not complete, it is preempted and moved to
queue Q2. Processes in queue Q2 are run on an FCFS basis only when queues Q0 and
Q1 are empty.
Multilevel-feedback-queue scheduler defined by the following parameters:
1. number of queues
2. scheduling algorithms for each queue
3. method used to determine when to upgrade a process
4. method used to determine when to demote a process
5. method used to determine which queue a process will enter when that process
needs service
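The three-queue scheme above can be sketched as a simulation. This is a simplification that assumes all jobs arrive at time 0 (so a new Q0 arrival never preempts a lower queue); the job names and burst times are made up:

```python
from collections import deque

def mfq(bursts, quanta=(8, 16)):
    """Three-queue MFQ: Q0 and Q1 are round robin with the given quanta,
    Q2 is FCFS. New jobs enter Q0; a job that exhausts its quantum is
    demoted one level. Returns completion time per job."""
    queues = [deque(bursts), deque(), deque()]
    remaining = dict(bursts)
    time, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        p = queues[level].popleft()
        run = remaining[p] if level == 2 else min(quanta[level], remaining[p])
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = time
        else:
            queues[level + 1].append(p)   # demote to the next queue
    return finish

# A finishes in Q0 at t=5; B uses 8 ms in Q0, 16 ms in Q1, then 6 ms in Q2.
print(mfq({"A": 5, "B": 30}))   # {'A': 5, 'B': 35}
```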
Multiple-Processor Scheduling:
CPU scheduling more complex when multiple CPUs are available.
1. Homogeneous multiprocessor system: Processors are identical in terms of
functionality; any available processor can be used to run any process in the queue. Load
sharing can be done. To avoid load imbalance, all processes can be placed in one common
queue and scheduled onto any available processor.
Scheduling approach in Homogeneous MP
Two scheduling approaches are used.
a) Self-scheduling:
Each processor is self-scheduling: it examines the common ready queue
and selects a process to execute.
A problem arises when more than one processor tries to select the same process from the
ready queue. Different techniques, such as locking the queue, are used to resolve the problem.
b) Master-Slave structure:
In this approach, one processor is appointed as the scheduler for
the other processors; the other processors execute only user code.
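The self-scheduling race described above is commonly resolved by locking the shared ready queue. A sketch using Python threads to stand in for processors (illustrative only; real kernels use mechanisms such as spinlocks rather than this):

```python
import threading
from collections import deque

# Common ready queue shared by all "processors".
ready_queue = deque(f"P{i}" for i in range(6))
lock = threading.Lock()
executed = []

def processor():
    """One self-scheduling processor: repeatedly grab the next process."""
    while True:
        with lock:                    # only one processor touches the queue
            if not ready_queue:
                return
            proc = ready_queue.popleft()
            executed.append(proc)     # stand-in for "run the process"

workers = [threading.Thread(target=processor) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(executed))   # each process was dispatched exactly once
```

Without the lock, two processors could dequeue and run the same process; the lock serializes access to the queue so every process is dispatched exactly once.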