Gurpreet Singh RA1805 Roll No 17
PROBLEM-2
SUBJECT CODE: CSE 366
SUBMITTED BY: Name: Gurpreet Singh, Roll No: 17
SUBMITTED TO: Lec. Mr. Ramandeep Singh, Department of CSE
CONTENTS

1. CPU SCHEDULING
2. BRIEF INTRODUCTION
3. VARIOUS TYPES OF OPERATING SYSTEM SCHEDULERS
   a) Long Term Scheduler
6. PREEMPTIVE SCHEDULING
   6.1 Types of Preemptive Scheduling
       a) Round Robin
       b) SRT
       c) Priority Based Preemptive
   6.2 Types of Non-Preemptive Scheduling
       a) FIFO
       b) Priority Based Non-Preemptive
       c) SJF (Shortest Job First)
7. MULTILEVEL FEEDBACK QUEUE SCHEDULING
8. PROS AND CONS OF DIFFERENT SCHEDULING ALGORITHMS
   8.1 FCFS
   8.2 SJF
   8.3 Fixed Priority Based Preemptive
   8.4 Round Robin Scheduling
   8.5 Multilevel Feedback Queue Scheduling
9. HOW TO CHOOSE SCHEDULING ALGORITHMS
10. OPERATING SYSTEM SCHEDULER IMPLEMENTATION
    10.1 Windows
    10.2 Mac OS
    10.3 Linux
    10.4 FreeBSD
    10.5 NetBSD
    10.6 Solaris
    Summary
11. COMPARISON BETWEEN OS SCHEDULERS
    11.1 Solaris 2 Scheduling
    11.2 Windows Scheduling
    11.3 Linux Scheduling
    11.4 Symmetric Multiprocessing in XP
    11.5 Comparison
    11.6 Diagrammatic Representation
12. MEMORY MANAGEMENT
    12.1 Introduction
        a) Requirements
        b) Relocation
        c) Protection
        d) Sharing
        e) Logical Organization
        f) Physical Organization
    12.2 DOS Memory Manager
    12.3 Mac Memory Managers
    12.4 Memory Management in Windows
    12.5 Memory Management in Linux
    12.6 Virtual Memory Areas
    12.7 Mac OS Memory Management
    12.8 Fragmentation
    12.9 Switcher
TYPES OF OPERATING
SYSTEM SCHEDULERS
Operating systems may feature up to three distinct types of
schedulers:
• Long-term scheduler.
• Mid-term scheduler.
• Short-term scheduler.
EXPLANATION
1. Long-term scheduler
The long-term, or admission, scheduler decides which jobs or
processes are to be admitted to the ready queue; that is, when
an attempt is made to execute a program, its admission to the set
of currently executing processes is either authorized or delayed by
the long-term scheduler.
2. Mid-term scheduler
The mid-term scheduler temporarily removes processes from
main memory and places them on secondary memory (such
as a disk drive) or vice versa. This is commonly referred to as
"swapping out" or "swapping in" (also incorrectly as
"paging out" or "paging in").
3. Short-term scheduler
The short-term scheduler (also known as the CPU scheduler)
decides which of the ready, in-memory processes are to be
executed (allocated a CPU) next following a clock interrupt,
an I/O interrupt, an operating system call, or another form
of signal.
Dispatcher
Another component involved in the CPU-scheduling function
is the dispatcher. The dispatcher is the module that gives control
of the CPU to the process selected by the short-term scheduler.
This function involves:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the user program to restart it.
SCHEDULING CRITERIA
Different CPU scheduling algorithms have different
properties, and the choice of a particular algorithm may favor one
class of processes over another. In choosing which algorithm to
use in a particular situation, we must consider the
properties of the various algorithms. Many criteria have been
suggested for comparing CPU scheduling algorithms. Which
characteristics are used for comparison can make a substantial
difference in which algorithm is judged to be best.
SCHEDULING ALGORITHM
A multiprogramming operating system allows more than one
process to be loaded into the executable memory at a time
and for the loaded process to share the CPU using time-
multiplexing. Part of the reason for using multiprogramming is that
the operating system itself is implemented as one or more
processes, so there must be a way for the operating system and
application processes to share the CPU. Another main reason is
the need for processes to perform I/O operations in the
normal course of computation. Since I/O operations
ordinarily require orders of magnitude more time to
complete than do CPU instructions, multiprogramming
systems allocate the CPU to another process whenever a
process invokes an I/O operation.
GOALS FOR SCHEDULING
A scheduling strategy should be judged against the following
criteria:
Context Switching
Typically there are several tasks to perform in a computer
system.
So if one task requires some I/O operation, you want to initiate the
I/O operation and go on to the next task. You will come back to it
later.
When you return to a process, it should resume where it left off.
For all practical purposes, the process should never know there
was a switch; it should look as if it were the only process in the
system.
Context of a process
• Program Counter.
• Stack Pointer.
• Registers.
All this information is usually stored in a structure called Process Control Block
(PCB).
Non-Preemptive vs. Preemptive Scheduling
• Non-Preemptive: Non-preemptive algorithms are designed
so that once a process enters the running state (is allocated a
processor), it is not removed from the processor until it has
completed its service time (or it explicitly yields the processor).
• Preemptive: The scheduler may remove a running process
from the processor before it finishes, for example when a
higher-priority process arrives or the process's time quantum
expires.
Average waiting time = 19.375 + 42/8 = 19.375 + 5.25 = 24.625 ms
OR (method 2): = 24.625 ms
SRT
[Process table and Gantt chart not recoverable from the source.]
Solution:
Average waiting time = 13 + 42/5 = 13 + 8.4 = 21.4 ms
Solution:
The length of the next CPU burst is predicted as an exponential
average of the previous bursts, τ(n+1) = α·t(n) + (1 − α)·τ(n),
typically with α = 1/2.
Limitation of SJF:
The difficulty is knowing the length of the next CPU request.
Average waiting time = 11.8 + 42/5 = 11.8 + 8.4 = 20.2 ms
Comment: SJF is provably optimal only when all jobs are available
simultaneously.
Parameters of a multilevel feedback queue scheduler include:
• The number of queues.
• When to demote a process to a lower-priority queue.
Round-robin scheduling
The scheduler assigns a fixed time unit (quantum) per process and
cycles through them.
Overview
Scheduling algorithms can be compared on several criteria: CPU
utilization, throughput, turnaround time, response time, deadline
handling, and whether the algorithm is starvation-free. [The
per-algorithm comparison table is not recoverable from the source.]
Windows
Very early MS-DOS and Microsoft Windows systems were non-
multitasking, and as such did not feature a scheduler. Windows
3.1x used a non-preemptive scheduler, meaning that it did not
interrupt programs. It relied on the program to end or tell the OS
that it didn't need the processor so that it could move on to another
process. This is usually called cooperative multitasking. Windows 95
introduced a rudimentary preemptive scheduler; however, for
legacy support it opted to let 16-bit applications run without
preemption.
Mac OS
Mac OS 9 uses cooperative scheduling for threads, where one
process controls multiple cooperative threads, and also provides
preemptive scheduling for MP tasks. The kernel schedules MP
tasks using a preemptive scheduling algorithm. All Process
Manager Processes run within a special MP task, called the
"blue task". Those processes are scheduled cooperatively, using
a round-robin scheduling algorithm; a process yields control of the
processor to another process by explicitly calling a blocking function
such as WaitNextEvent. Each process has its own copy of the Thread
Manager that schedules that process's threads cooperatively; a
thread yields control of the processor to another thread by
calling YieldToAnyThread or YieldToThread.
Linux
From version 2.5 of the kernel to version 2.6, Linux used a multilevel
feedback queue with priority levels ranging from 0-140. 0-99 are
reserved for real-time tasks and 100-140 are considered nice task
levels. For real-time tasks, the time quantum for switching
processes is approximately 200 ms, and for nice tasks
approximately 10 ms. The scheduler will run through the queue of
all ready processes, letting the highest priority processes go first
and run through their time slices, after which they will be placed in
an expired queue. When the active queue is empty the expired
queue will become the active queue and vice versa. From versions
2.6 to 2.6.23, the kernel used an O(1) scheduler. In version 2.6.23,
they replaced this method with the Completely Fair Scheduler that
uses red-black trees instead of queues.
FreeBSD
FreeBSD uses a multilevel feedback queue with priorities ranging
from 0-255. 0-63 are reserved for interrupts, 64-127 for the top half
of the kernel, 128-159 for real-time user threads, 160-223 for time-
shared user threads, and 224-255 for idle user threads. Also, like
Linux, it uses the active queue setup, but it also has an idle queue.
NetBSD
NetBSD uses a multilevel feedback queue with priorities ranging
from 0-223. 0-63 are reserved for time-shared threads (default,
SCHED_OTHER policy), 64-95 for user threads which entered kernel
space, 96-128 for kernel threads, 128-191 for user real-time threads
(SCHED_FIFO and SCHED_RR policies), and 192-223 for software
interrupts.
Solaris
Solaris uses a multilevel feedback queue with priorities ranging from
0-169. 0-59 are reserved for time-shared threads, 60-99 for system
threads, 100-159 for real-time threads, and 160-169 for low priority
interrupts. Unlike Linux, when a process is done using its time
quantum, it's given a new priority and put back in the queue.
SUMMARY:
Operating System | Preemption | Algorithm
[The summary table rows are not recoverable from the source.]
1. Solaris 2 Scheduling
• Priority-based process scheduling
• The scheduling policy for the system class does not time-
slice
• The selected thread runs on the CPU until it blocks, uses up its
time slice, or is preempted by a higher-priority thread
2. Windows scheduling
Overview: the Windows performance tools display all context
switches by CPU, as in the screenshot referenced below.
[Screenshot of a graph showing CPU scheduling, zoomed to 500
microseconds, not included.]
Graph description:
Shows all context switches for a time interval, aggregated
by CPU. A tooltip displays detailed information on each
context switch, including the call stack for the new thread.
Further information on the call stacks is available through
the summary tables, which are accessed by right-clicking on
the graph and choosing Summary Table.
A second graph displays the percentage of the total CPU
resource each processor spends servicing device interrupts.
Notable points:
• Priority-based preemptive scheduling
• A thread's priority is lowered (but never below its base priority)
when its time quantum runs out
• Real-time scheduling.
• Two real-time scheduling classes: FCFS (non-preemptive)
and RR (preemptive).
Symmetric multiprocessing in XP
COMPARISON
1) Solaris 2 Uses priority-based process scheduling.
2) Windows 2000 uses a priority-based preemptive scheduling
algorithm.
3) Linux provides two separate process-scheduling algorithms: one is
designed for time-sharing processes and fair preemptive scheduling
among multiple processes; the other is designed for real-time tasks.
a) For processes in the time-sharing class Linux uses a prioritized
credit-based algorithm.
b) Real-time scheduling: Linux implements two real-time scheduling
classes, namely FCFS (First Come First Served) and RR (Round Robin).
DIAGRAMMATIC REPRESENTATION
Solaris Scheduling
Windows XP Scheduling
Linux Scheduling
INTRODUCTION
Memory management is the act of managing computer
memory. In its simpler forms, this involves providing ways to
allocate portions of memory to programs at their request,
and freeing it for reuse when no longer needed. The
management of main memory is critical to the computer
system.
Relocation
In systems with virtual memory, programs in memory must be able
to reside in different parts of the memory at different times. This is
because when a program is swapped back into memory after being
swapped out for a while, it cannot always be placed in the same
location. The virtual memory management unit must also deal
with concurrency. Memory management in the operating system
should therefore be able to relocate programs in memory and
handle memory references and addresses in the code of the
program so that they always point to the right location in memory.
Protection
Processes should not be able to reference the memory for another
process without permission. This is called memory protection, and
prevents malicious or malfunctioning code in one program from
interfering with the operation of other running programs.
Sharing
Even though the memory for different processes is normally
protected from each other, different processes sometimes need to
be able to share information and therefore access the same part of
memory. Shared memory is one of the fastest techniques for Inter-
process communication.
Logical organization
Programs are often organized in modules. Some of these modules
could be shared between different programs, some are read only
and some contain data that can be modified. The memory
management is responsible for handling this logical organization
that is different from the physical linear address space. One way to
arrange this organization is segmentation.
Physical Organization
Memory is usually divided into fast primary storage and
slow secondary storage. Memory management in the operating
system handles moving information between these two levels of
memory.
In any program you write, you must ensure that you manage
resources effectively and efficiently. One such resource is your
program’s memory. In an Objective-C program, you must make sure
that objects you create are disposed of when you no longer need
them.
Memory Management in
WINDOWS
This is one of three related technical articles—"Managing
Virtual Memory," "Managing Memory-Mapped Files," and
"Managing Heap Memory"—that explain how to manage memory
in applications for Windows.
Address Types
Linux is, of course, a virtual memory system, meaning that
the addresses seen by user programs do not directly
correspond to the physical addresses used by the hardware.
Virtual memory introduces a layer of indirection that allows a
number of nice things. With virtual memory, programs running
on the system can allocate far more memory than is physically
available; indeed, even a single process can have a virtual address
space larger than the system's physical memory. Virtual memory
also allows the program to play a number of tricks with the
process's address space, including mapping the program's
memory to device memory.
Thus far, we have talked about virtual and physical addresses,
but a number of the details have been glossed over. The Linux
system deals with several types of addresses, each with its own
semantics. Unfortunately, the kernel code is not always very clear
on exactly which type of address is being used in each situation, so
the programmer must be careful.
Bus addresses
The addresses used between peripheral buses and memory. Often they
are the same as the physical addresses used by the processor, but
that is not necessarily the case. Some architectures provide an
I/O memory management unit (IOMMU) that remaps addresses between
a bus and main memory.
For example: on a system with 4 KB pages, the 12 least-significant
bits of an address are the offset within the page, and the
remaining, higher bits indicate the page number. If you discard the
offset and shift the rest of the address to the right, the result is
called a page frame number (PFN). Shifting bits to convert between
page frame numbers and addresses is a fairly common operation; the
macro PAGE_SHIFT tells how many bits must be shifted to make this
conversion.
Mac OS memory
management
The original problem for the designers of the Macintosh was how to
make optimum use of the 128 KB of RAM that the machine was
equipped with.[1] Since at that time the machine could only run
one application program at a time, and there was
no fixed secondary storage, the designers implemented a simple
scheme which worked well with those particular constraints.
However, that design choice did not scale well with the development
of the machine, creating various difficulties for both programmers
and users.
FRAGMENTATION
The chief worry of the original designers appears to have
been fragmentation - that is, repeated allocation and
deallocation of memory through pointers leads to many
small isolated areas of memory which cannot be used
because they are too small, even though the total free
memory may be sufficient to satisfy a particular request for
memory.
This may inflate the amount of actual RAM being used by the
system. When RAM is needed, the system will swap or page out
those pieces not needed or not currently in use. It is
important to bear this in mind because a casual examination of
memory usage with the top command via the Terminal application
will reveal large amounts of RAM being used by applications. (The
Terminal application allows users to access the UNIX operating
system which is the foundation of Mac OS X.) When needed, the
system will dynamically allocate additional virtual memory, so there
is no need for users to try to tamper with how the system handles
additional memory needs. However, there is no substitute for having
additional physical RAM.
#include<stdio.h>
#include<conio.h>

void roundrobin();
void fifo();
void prioritynonpre();
void sjf();
void fcfs();
void lru();

int main()
{
    int choice1,choice2,choice3,choice4,choice5;
    while(1)
    {
        /* Top-level menu: CPU scheduling or page replacement.
           (Menu prompt text restored; lost in the source.) */
        printf("\n 1.CPU Scheduling\n 2.Page Replacement\n 0.Exit\n Enter your choice:");
        scanf("%d",&choice1);
        if(choice1 == 1)
        {
            printf("\n\n Enter your choice:\n 1.For Pre-emptive\n 2.For Non-Preemptive\n 0.To Exit\n Enter your choice:");
            scanf("%d",&choice2);
            if(choice2 == 1)
            {
                printf("\n 1.Round Robin\n 0.Exit\n Enter your choice:");
                scanf("%d",&choice3);
                if(choice3 == 1)
                    roundrobin();
                else if(choice3 == 0)
                    break;
                else
                    getch();
            }
            else if(choice2 == 2)
            {
                printf("\n 1.FIFO\n 3.SJF\n 0.Exit\n Enter your choice:");
                scanf("%d",&choice4);
                if(choice4 == 1)
                    fifo();
                else if(choice4 == 3)
                    sjf();
                else if(choice4 == 0)
                    break;
                else
                    getch();
            }
            else if(choice2 == 0)
                break;
            else
                getch();
        }
        else if(choice1 == 2)
        {
            printf("\n 1.FCFS page replacement\n 2.LRU page replacement\n 0.Exit\n Enter your choice:");
            scanf("%d",&choice5);
            if(choice5 == 1)
                fcfs();
            else if(choice5 == 2)
                lru();
            else if(choice5 == 0)
                break;
            else
                getch();
        }
        else if(choice1 == 0)
            break;
        else
            getch();
    }
    return 0;
}
/* Non-preemptive Shortest Job First scheduling for 5 processes. */
void sjf()
{
    int burst[5],arrival[5],done[5],waiting[5];
    int i,j,l,sum,min,finished;
    float awt;

    sum = 0;
    for(i=0;i<5;i++)
    {
        printf("\n\n\tProcess %d\n",i+1);
        printf("Burst time: ");
        scanf("%d",&burst[i]);
        printf("Arrival time: ");
        scanf("%d",&arrival[i]);
        done[i] = 0;
        waiting[i] = 0;
    }
    for(i=0;i<5;i++)
        sum += burst[i];            /* total CPU time required */

    /* i tracks the current time; at each step pick the shortest
       unfinished job that has already arrived. */
    i = 0;
    finished = 0;
    while(finished < 5)
    {
        min = sum + 1;
        l = -1;
        for(j=0;j<5;j++)
        {
            if(!done[j] && arrival[j] <= i && burst[j] < min)
            {
                min = burst[j];
                l = j;
            }
        }
        if(l == -1)                 /* no job has arrived: CPU idles */
        {
            i++;
            continue;
        }
        waiting[l] = i - arrival[l];
        i = i + burst[l];           /* advance the clock */
        done[l] = 1;
        finished++;
    }

    awt = 0.0F;
    for(i=0;i<5;i++)
    {
        printf("\np%d = %d",i+1,waiting[i]);
        awt += waiting[i];
    }
    awt = awt/5;
    printf("\n\nAWT = %f",awt);
    getch();
}
/* Round Robin scheduling for 4 processes with a fixed time quantum.
   For simplicity, waiting times are computed as if all processes
   arrive at time 0. */
void roundrobin()
{
    int burst[4],arrival[4],lefttime[4],waiting[4],last[4];
    int i,j,sum,run;
    int gap=0;                      /* time quantum */
    float awt=0;

    sum = 0;
    for(i=0;i<4;i++)
    {
        printf("\nProcess %d burst time: ",i+1);
        scanf("%d",&burst[i]);
        printf("Process %d arrival time: ",i+1);
        scanf("%d",&arrival[i]);
        lefttime[i] = burst[i];     /* CPU time still needed */
        waiting[i] = 0;
        last[i] = 0;                /* time the process last ran */
    }
    printf("\nTime quantum: ");
    scanf("%d",&gap);
    for(i=0;i<4;i++)
        sum += burst[i];            /* total CPU time required */

    /* Cycle through the processes, giving each one quantum at a time. */
    i = 0;                          /* current time */
    j = 0;                          /* current process */
    while(i < sum)
    {
        if(lefttime[j] > 0)
        {
            run = (lefttime[j] < gap) ? lefttime[j] : gap;
            waiting[j] += i - last[j];   /* waited since it last ran */
            i += run;
            lefttime[j] -= run;
            last[j] = i;
        }
        if(j < 3)
            j++;
        else
            j = 0;
    }

    for(i=0;i<4;i++)
        awt += waiting[i];
    awt = awt/4;
    printf("\n\n The average waiting time is %.3f",awt);
    getch();
}
/* LRU page replacement: num page references, buf frames. */
void lru()
{
    int num,i,buf,page[100],buff[100],j,pagefault,rep,flag =
    1,ind,abc[100];
    int count;
    int l,k,fla;

    printf("\nNumber of pages: ");
    scanf("%d",&num);
    printf("Reference string: ");
    for(i=0;i<num;i++)
        scanf("%d",&page[i]);
    printf("Number of frames: ");
    scanf("%d",&buf);

    for(j=0;j<buf;j++)
        buff[j] = 0;
    pagefault = 0;
    k = 0;

    for(i=0;i<num;i++)
    {
        /* Hit check: is the page already in a frame? */
        flag = 1;
        for(j=0;j<buf;j++)
        {
            if(buff[j] == page[i])
            {
                flag = 0;
                break;
            }
        }
        if(flag == 0)
            continue;               /* hit - no fault */

        if(k < buf)
        {
            /* Free frame available: just load the page. */
            buff[k] = page[i];
            k++;
            pagefault++;
            for(l=0;l<buf;l++)
                if(buff[l] != 0)
                    printf(" %d",buff[l]);
            printf("\n");
            continue;
        }

        /* All frames full: walk backwards through the reference
           string collecting distinct pages; the buf-th distinct page
           found is the least recently used one. */
        count = 0;
        for(l=i-1;l>=0;l--)
        {
            fla = 1;
            for(j=0;j<count;j++)
            {
                if(abc[j] == page[l])
                {
                    fla = 0;
                    break;
                }
            }
            if(fla == 1)
            {
                abc[count] = page[l];
                count++;
                if(count == buf)
                {
                    rep = abc[buf-1];   /* least recently used page */
                    break;
                }
            }
        }
        for(l=0;l<buf;l++)
        {
            if(rep == buff[l])
            {
                ind = l;
                break;
            }
        }
        printf("\nReplacement = %d",rep);
        printf("\nindex = %d",ind);
        buff[ind] = page[i];
        pagefault++;
    }
    printf("\nPage faults = %d",pagefault);
    getch();
}
/* FCFS (FIFO) CPU scheduling: processes run in the order entered. */
void fifo()
{
    int ptime[25],n,s=0,i,sum=0;
    char name[25][25];
    float avg;

    printf("Number of processes: ");
    scanf("%d",&n);
    printf("Enter process names:\n");
    for(i=0;i<n;i++)
    {
        printf("%d \t",i+1);
        scanf("%s",name[i]);
    }
    printf("\n\nEnter processing times:\n");
    for(i=0;i<n;i++)
    {
        printf("%s \t",name[i]);
        scanf("%d",&ptime[i]);
    }
    printf("\n\n");
    for(i=0;i<n;i++)
        printf("\t %s \t \t %d \n",name[i],ptime[i]);

    /* Average waiting time: each process waits for all earlier ones. */
    for(i=0;i<(n-1);i++)
    {
        s += ptime[i];
        sum += s;
    }
    avg = (float)sum/n;
    printf("\nAverage waiting time    = %.2f msec",avg);

    /* Average turnaround time: waiting time plus own processing time. */
    sum = s = 0;
    for(i=0;i<n;i++)
    {
        s += ptime[i];
        sum += s;
    }
    avg = (float)sum/n;
    printf("\nAverage turnaround time = %.2f msec",avg);
    getch();
}
/* FIFO page replacement: the oldest resident page is evicted first. */
void fcfs()
{
    int num,i,buf,page[100],buff[100],j,pagefault,flag =
    1,temp,k,l;

    printf("\nNumber of pages: ");
    scanf("%d",&num);
    printf("Reference string: ");
    for(i=0;i<num;i++)
        scanf("%d",&page[i]);
    printf("Number of frames: ");
    scanf("%d",&buf);

    for(j=0;j<buf;j++)
        buff[j] = 0;
    pagefault = 0;
    k = 0;

    for(i=0;i<num;i++)
    {
        /* Hit check: is the page already in a frame? */
        flag = 1;
        for(j=0;j<buf;j++)
        {
            if(buff[j] == page[i])
            {
                flag = 0;
                break;
            }
        }
        if(flag == 0)
            continue;               /* hit - no fault */

        if(k < buf)
        {
            /* Free frame available: just load the page. */
            buff[k] = page[i];
            k++;
            pagefault++;
            for(l=0;l<buf;l++)
                if(buff[l] != 0)
                    printf(" %d",buff[l]);
            printf("\n");
            continue;
        }

        /* Frames full: shift out the oldest page and append the new
           one at the end of the queue. */
        for(j=0;j<buf-1;j++)
        {
            temp = buff[j+1];
            buff[j+1] = buff[j];
            buff[j] = temp;
        }
        buff[buf-1] = page[i];
        pagefault++;
        for(l=0;l<buf;l++)
            if(buff[l] != 0)
                printf(" %d",buff[l]);
        printf("\n");
    }
    printf("\nPage faults = %d",pagefault);
    getch();
}