
DESIGN PROBLEM - 2
SUBJECT CODE: CSE 366

SUBMITTED BY:
Name: Gurpreet Singh
Roll No: 17
Sec No: RA1805
Group No: G2
Reg. No: 10805721

SUBMITTED TO:
Lec. Mr. Ramandeep Singh
Department of CSE
LOVELY PROFESSIONAL UNIVERSITY

SERIAL NO.  CONTENTS

1.  CPU SCHEDULING
2.  BRIEF INTRODUCTION
3.  VARIOUS TYPES OF OPERATING SYSTEM SCHEDULERS
    a) Long-Term Scheduler
    b) Mid-Term Scheduler
    c) Short-Term Scheduler
    Explanation of the schedulers
4.  DISPATCHER
5.  SCHEDULING CRITERIA
    5.1 Scheduling Algorithms
    5.2 Goals for Scheduling
    5.3 Context Switching
    5.4 Context of a Process
6.  PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING
    6.1 Types of Preemptive Scheduling
        a) Round Robin
        b) SRT (Shortest Remaining Time)
        c) Priority-Based Preemptive
    6.2 Types of Non-Preemptive Scheduling
        a) FIFO
        b) Priority-Based Non-Preemptive
        c) SJF (Shortest Job First)
7.  MULTILEVEL FEEDBACK QUEUE SCHEDULING
8.  PROS AND CONS OF DIFFERENT SCHEDULING ALGORITHMS
    8.1 FCFS
    8.2 SJF
    8.3 Fixed-Priority Preemptive
    8.4 Round Robin Scheduling
    8.5 Multilevel Feedback Queue Scheduling
9.  HOW TO CHOOSE A SCHEDULING ALGORITHM
10. OPERATING SYSTEM SCHEDULER IMPLEMENTATION
    10.1 Windows
    10.2 Mac OS
    10.3 Linux
    10.4 FreeBSD
    10.5 NetBSD
    10.6 Solaris
    Summary
11. COMPARISON BETWEEN OS SCHEDULERS
    11.1 Solaris 2 Scheduling
    11.2 Windows Scheduling
    11.3 Linux Scheduling
    11.4 Symmetric Multiprocessing in XP
    11.5 Comparison
    11.6 Diagrammatic Representation
12. MEMORY MANAGEMENT
    12.1 Introduction
        a) Requirements
        b) Relocation
        c) Protection
        d) Sharing
        e) Logical Organization
        f) Physical Organization
    12.2 DOS Memory Manager
    12.3 Mac Memory Managers
    12.4 Memory Management in Windows
    12.5 Memory Management in Linux
    12.6 Virtual Memory Areas
    12.7 Mac OS Memory Management
    12.8 Fragmentation
    12.9 Switcher
    How is virtual memory handled in Mac OS X?
13. CODE IN C FOR IMPLEMENTATION OF CPU SCHEDULING ALGORITHMS AND MEMORY MANAGEMENT TECHNIQUES
INTRODUCTION

Scheduling is a key concept in computer multitasking, multiprocessing
operating system and real-time operating system designs. Scheduling
refers to the way processes are assigned to run on the available
CPUs, since there are typically many more processes running than
there are available CPUs. This assignment is carried out by software
known as a scheduler and a dispatcher.

The scheduler is concerned mainly with:

• CPU utilization - keeping the CPU as busy as possible.

• Throughput - the number of processes that complete their execution
per time unit.

• Turnaround time - the total time between submission of a process
and its completion.

• Waiting time - the amount of time a process has been waiting in
the ready queue.

• Response time - the amount of time from when a request was
submitted until the first response is produced.

• Fairness - equal CPU time to each thread.

TYPES OF OPERATING
SYSTEM SCHEDULERS
Operating systems may feature up to three distinct types of
schedulers:

• Long-term scheduler.

• Mid-term or medium-term scheduler.

• Short-term scheduler.
EXPLANATION

1. Long-term scheduler
The long-term, or admission, scheduler decides which jobs or
processes are to be admitted to the ready queue; that is, when an
attempt is made to execute a program, its admission to the set of
currently executing processes is either authorized or delayed by the
long-term scheduler.

2. Mid-term scheduler
The mid-term scheduler temporarily removes processes from
main memory and places them on secondary memory (such
as a disk drive) or vice versa. This is commonly referred to as
"swapping out" or "swapping in" (also incorrectly as
"paging out" or "paging in").

The mid-term scheduler may decide to swap out:

• a process which has not been active for some time;

• a process which has a low priority;

• a process which is page faulting frequently; or

• a process which is taking up a large amount of memory,

in order to free up main memory for other processes, swapping the
process back in later when more memory is available, or when the
process has been unblocked and is no longer waiting for a resource.

3. Short-term scheduler
The short-term scheduler (also known as the CPU scheduler)
decides which of the ready, in-memory processes are to be
executed (allocated a CPU) next following a clock interrupt,
an I/O interrupt, an operating system call or another form
of signal.

Thus the short-term scheduler makes scheduling decisions


much more frequently than the long-term or mid-term
schedulers - a scheduling decision will at a minimum have to be
made after every time slice, and these are very short. This
scheduler can be preemptive, implying that it is capable of
forcibly removing processes from a CPU when it decides to allocate
that CPU to another process, or non-preemptive (also known as
"voluntary" or "co-operative"), in which case the scheduler is
unable to "force" processes off the CPU.

Dispatcher
Another component involved in the CPU-scheduling function
is the dispatcher. The dispatcher is the module that gives control
of the CPU to the process selected by the short-term scheduler.

This function involves the following:

• Switching context

• Switching to user mode

• Jumping to the proper location in the user program to restart that
program

The dispatcher should be as fast as possible, since it is invoked


during every process switch. The time it takes for the dispatcher
to stop one process and start another running is known as the
dispatch latency.

SCHEDULING CRITERIA
Different CPU scheduling algorithms have different
properties, and the choice of a particular algorithm may favor one
class of processes over another. In choosing which algorithm to
use in a particular situation, we must consider the
properties of the various algorithms. Many criteria have been
suggested for comparing CPU scheduling algorithms. Which
characteristics are used for comparison can make a substantial
difference in which algorithm is judged to be best.

The criteria for the CPU scheduling include the following:

• CPU Utilization: We want to keep the CPU as busy as


possible.

• Throughput: If the CPU is busy executing processes, then


work is being done. One measure of work is the number of
processes that are completed per time unit, called throughput.
For long processes, this rate may be one process per hour; for
short transactions, it may be 10 processes per second.

• Turnaround time: From the point of view of a particular


process, the important criterion is how long it takes to execute
that process. The interval from the time of submission of a
process to the time of completion is the turnaround time.

a) Turnaround time is the sum of the periods spent


waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.

• Waiting time. The CPU scheduling algorithm does not affect the
amount of time during which a process executes or does I/O; it
affects only the amount of time that a process spends waiting in the
ready queue. Waiting time is the sum of the periods spent waiting in
the ready queue.

• Response time. In an interactive system, turnaround


time may not be the best criterion. Often, a process can
produce some output fairly early and can continue
computing new results while previous results are being
output to the user. Thus, another measure is the time from the
submission of a request until the first response is produced. This
measure, called response time, is the time it takes to start
responding, not the time it takes to output the response. The
turnaround time is generally limited by the speed of the output
device.

SCHEDULING ALGORITHM
A multiprogramming operating system allows more than one
process to be loaded into the executable memory at a time
and for the loaded process to share the CPU using time-
multiplexing. Part of the reason for using multiprogramming is that
the operating system itself is implemented as one or more
processes, so there must be a way for the operating system and
application processes to share the CPU. Another main reason is
the need for processes to perform I/O operations in the
normal course of computation. Since I/O operations
ordinarily require orders of magnitude more time to
complete than do CPU instructions, multiprogramming
systems allocate the CPU to another process whenever a
process invokes an I/O operation.
GOALS FOR SCHEDULING
Make sure your scheduling strategy is good enough with the
following criteria:

• Utilization/Efficiency: keep the CPU busy 100% of the time with
useful work.

• Throughput: maximize the number of jobs processed per hour.

• Turnaround time: minimize the time batch users must wait for
output, from the time of submission to the time of completion.

• Waiting time: minimize the sum of times spent in the ready queue.

• Response time: minimize the time from submission till the first
response is produced, for interactive users.

• Fairness: make sure each process gets a fair share of the CPU.

Context Switching
Typically there are several tasks to perform in a computer
system.

So if one task requires some I/O operation, you want to initiate the
I/O operation and go on to the next task. You will come back to it
later.

This act of switching from one process to another is called a


"Context Switch"

When you return back to a process, you should resume where you
left off. For all practical purposes, this process should never know
there was a switch, and it should look like this was the only process
in the system.

To implement this, on a context switch, the system has to:

• save the context of the current process;

• select the next process to run;

• restore the context of this new process.

Context of a process
• Program Counter.

• Stack Pointer.

• Registers.

• Code + Data + Stack (also called Address Space).


• Other state information maintained by the OS for the process
(open files, scheduling info, I/O devices being used etc.).

All this information is usually stored in a structure called Process Control Block
(PCB).

All the above has to be saved and restored.
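As a sketch, the saved context can be represented by a C structure like the one below. The field names and sizes are illustrative assumptions for this report, not taken from any real kernel's PCB layout:

```c
#define NUM_REGS 16
#define MAX_OPEN_FILES 16

/* Illustrative Process Control Block (PCB). Field names are
   hypothetical; a real kernel stores far more state. */
struct pcb {
    int pid;                        /* process identifier             */
    unsigned long program_counter;  /* where to resume execution      */
    unsigned long stack_pointer;    /* top of the process stack       */
    unsigned long regs[NUM_REGS];   /* general-purpose registers      */
    int state;                      /* e.g. READY, RUNNING, BLOCKED   */
    int priority;                   /* scheduling information         */
    int open_files[MAX_OPEN_FILES]; /* other per-process OS state     */
};

/* On a context switch the kernel saves the outgoing process's
   context into its PCB; here we just copy two fields to illustrate. */
void save_context(struct pcb *p, unsigned long pc, unsigned long sp)
{
    p->program_counter = pc;
    p->stack_pointer = sp;
}
```

Restoring a context is the mirror image: the dispatcher loads the saved program counter, stack pointer and registers from the PCB of the process selected to run next.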

Non-Preemptive Vs
Preemptive Scheduling
• Non-Preemptive: Non-preemptive algorithms are designed so that
once a process enters the running state (is allocated a processor),
it is not removed from the processor until it has completed its
service time (or it explicitly yields the processor).

context_switch() is called only when the process terminates or
blocks.

• Preemptive: Preemptive algorithms are driven by the notion of
prioritized computation. The process with the highest priority
should always be the one currently using the processor. If a process
is currently using the processor and a new process with a higher
priority enters the ready list, the process on the processor should
be removed and returned to the ready list until it is once again the
highest-priority process in the system.

context_switch() is called even while the process is running,
usually via a timer interrupt.

Types of preemptive scheduling


algorithms:

1) Round Robin method:


CPU is allocated to each job for a fixed time slice in FCFS order.
Round Robin calls for the distribution of the processing time
equitably among all processes requesting the processor. Run
process for one time slice, then move to back of queue. Each
process gets equal share of the CPU. Most systems use some variant
of this.

Choosing Time Slice

What happens if the time slice isn’t chosen carefully?

• For example, consider two processes, one doing 1 ms


computation followed by 10 ms I/O, the other doing all
computation. Suppose we use 20 ms time slice and round-robin
scheduling: I/O process runs at 11/21 speed, I/O devices are only
utilized 10/21 of time.
• Suppose we use 1 ms time slice: then compute-bound process gets
interrupted 9 times unnecessarily before I/O-bound process is runnable

LIMITATION: Round robin assumes that all processes are


equally important; each receives an equal portion of the CPU. This
sometimes produces bad results. Consider three processes that
start at the same time and each requires three time slices to finish.
Using FIFO how long does it take the average job to complete (what
is the average response time)? How about using round robin?

* Using FIFO, process A finishes after 3 slices, B after 6, and C
after 9. The average is (3+6+9)/3 = 6 slices. Using round robin with
a one-slice quantum, A finishes after 7 slices, B after 8, and C
after 9, for an average of (7+8+9)/3 = 8 slices.
• A process with a large CPU burst makes progress only with
difficulty, so this method is not used on its own very often.

Solution: Introduce priority based scheduling.

It is explained here by taking an example:

Process  Arrival Time  CPU Time  Priority
P1       0             6         2
P2       1             4         1
P3       1             3         3
P4       3             2         2
P5       3             8         3
P6       4             6         1
P7       5             7         4
P8       5             6         3

Round Robin with time slice = 3

Solution: The Gantt chart is drawn as:
Waiting time for P1=0+6=6 ms

Waiting time for P2=(3+20)-1=22 ms

Waiting time for P3= 14-1 =13 ms

Waiting time for P4= 12-3=9 ms

Waiting time for P5= (17+10+6)-3 =30 ms

Waiting time for P6=(6+18)-4=20 ms

Waiting time for P7= (23+10+2)-5=30 ms

Waiting time for P8= (20+10)-5 =25 ms

Average Waiting Time = (6+22+13+9+30+20+30+25)/8


=19.375 ms

Average Turnaround Time = Av. Waiting Time + Av. Execution Time

= 19.375 + 42/8 = 19.375 + 5.25 = 24.625 ms

OR (method 2)

Average Turnaround Time = (12 + (27-1) + (17-1) + (14-3) + (41-3) +
(30-4) + (42-5) + (36-5))/8 = 197/8 = 24.625 ms

Average Response Time = 15.3 ms
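The round-robin behaviour discussed above can be checked with a small simulator. This is a minimal sketch that assumes all processes arrive at time 0 (so the queue order is fixed and no tie-breaking rule is needed); it is enough to reproduce the fair-but-inefficient effect from the three-process A/B/C example:

```c
#define MAXP 8

/* Minimal round-robin simulator for processes that all arrive at
   time 0. Each pass over the array runs every unfinished process
   for up to one quantum, mimicking a circular ready queue.
   Returns the average waiting time (turnaround minus burst). */
double rr_avg_wait(const int burst[], int n, int quantum)
{
    int rem[MAXP], finish[MAXP];
    int done = 0, t = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {        /* cycle through queue */
            if (rem[i] > 0) {
                int run = rem[i] < quantum ? rem[i] : quantum;
                t += run;
                rem[i] -= run;
                if (rem[i] == 0) { finish[i] = t; done++; }
            }
        }
    }
    double total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - burst[i];  /* wait = turnaround - burst */
    return total_wait / n;
}
```

For three jobs of 3 units each, a quantum of 3 makes round robin behave like FIFO (average wait 3 units), while a quantum of 1 interleaves the jobs and raises the average wait to 5 units, illustrating why round robin is fair but uniformly inefficient.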

2) Shortest remaining time: It is explained by the example given
below:

Process  Arrival Time  CPU Time
X        1             4
Y        0             6
Z        1             3

Solution: The Gantt chart for SRT is:

| Y | Z | X | Y |
0   1   4   8   13

Average Waiting Time = (7+0+3)/3 = 3.33 ms (Y waits from t=1 to t=8)

Average Turnaround Time= (13+3+7)/3 = 7.67 ms

Average Response Time= (0+0+3)/3 = 1 ms

3) Priority Based preemptive

The SJF algorithm is a special case of the general priority
scheduling algorithm. A priority is associated with each process and
the CPU is allocated to the process with the highest priority. Equal
priority processes are scheduled in FCFS order. SJF is simply a
priority algorithm where the priority p is the inverse of the
predicted next CPU burst: the longer the burst, the lower the
priority.

Run highest-priority processes first, using round-robin among
processes of equal priority. Re-insert a preempted process in the
run queue behind all processes of greater or equal priority.

• It allows the CPU to be given preferentially to important
processes.

• The scheduler adjusts dispatcher priorities to achieve the desired
overall priorities for the processes, e.g. one process gets 90% of
the CPU.

It is explained by the example given below:

Process  Arrival Time  CPU Time  Priority
P1       0             6         2
P2       7             10        1
P3       4             4         3
P4       1             10        2
P5       2             12        0

Solution:
The Gantt chart is drawn as below:

Waiting time for P1=0+(24-2)=22ms


Waiting time for P2=14-7=7 ms
Waiting time for P3= 38-4 =34 ms
Waiting time for P4= 28-1=27 ms
Waiting time for P5= 2-2 =0 ms
Average Waiting Time = (22+7+34+27+0)/5 = 90/5 = 18 ms

Average Turnaround Time = Av. Waiting Time + Av. Execution Time
= 18 + 42/5 = 18 + 8.4 = 26.4 ms
Comments: In priority scheduling, processes are allocated to the
CPU on the basis of an externally assigned priority. The key to the
performance of priority scheduling is in choosing priorities for the
processes.

Problem: Priority scheduling may cause low-priority processes to


starve

Solution: (AGING) This starvation can be compensated for if the


priorities are internally computed. Suppose one parameter in the
priority assignment function is the amount of time the process has
been waiting. The longer a process waits, the higher its priority
becomes. This strategy tends to eliminate the starvation problem.
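A minimal sketch of aging in C, assuming the convention from the examples above that a lower number means a higher priority. The boost interval and the floor of 0 are illustrative choices, not values from any particular system:

```c
/* Aging sketch: every 'interval' ticks a process spends waiting,
   its priority number drops by one, so it eventually reaches the
   top priority (0) and cannot starve. */
int aged_priority(int base_priority, int waiting_time, int interval)
{
    int boost = waiting_time / interval;   /* boosts earned so far */
    int p = base_priority - boost;
    return p < 0 ? 0 : p;                  /* clamp at top priority */
}
```

For example, a priority-3 process that has waited two full intervals is treated as priority 1, and after enough waiting it competes at priority 0 with everything else.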

Non-preemptive scheduling:

In non-preemptive scheduling, once a process starts executing on the
processor, the processor cannot be taken back from it until the
process finishes executing. That is, it is the scheduling in which a
process running on a processor is not interrupted until it has
completed.

Types of non preemptive scheduling


algorithms:

First in First Out (FIFO)

This is a non-preemptive scheduling algorithm. The FIFO strategy
assigns priority to processes in the order in which they request the
processor. The process that requests the CPU first is allocated the
CPU first. When a process comes in, add its PCB to the tail of the
ready queue. When the running process terminates, dequeue the
process (PCB) at the head of the ready queue and run it.

In the FCFS algorithm, the process that requests the CPU first is
allocated the CPU first. The implementation of the FCFS policy is
easily managed with a FIFO queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is
free, it is allocated to the process at the head of the queue, and
the running process is then removed from the queue. The code for
FCFS scheduling is easy to write and understand.

Consider the processes given below in this example:

Process  Arrival Time  CPU Time
P1       0             6
P2       0             10
P3       1             4
P4       4             10
P5       2             12

The Gantt chart is drawn as below:

Waiting time for P1=0

Waiting time for P2=6

Waiting time for P3= (16-1) =15

Waiting time for P4= (20-4) =16

Waiting time for P5= (30-2) =28

Average Waiting Time = (0+6+15+16+28)/5 =65/5 =13 ms

Average Turnaround Time = Av. Waiting Time + Av. Execution Time

= 13 + 42/5 = 13 + 8.4 = 21.4 ms

Comments: While the FIFO algorithm is easy to implement, it


ignores the service time request and all other criteria that may
influence the performance with respect to turnaround or waiting
time.

Problem: One Process can monopolize CPU.


Solution: Limit the amount of time a process can run without a
context switch. This time is called a time slice.
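The FCFS waiting-time arithmetic can be reproduced with a short C function. Note one subtlety: strict arrival order would service P5 (arrival time 2) before P4 (arrival time 4), so this sketch reports an average wait of 13.4 ms for the example above rather than the 13 ms obtained from the chart, which serves P4 first:

```c
#define MAXP 8

/* FCFS: serve processes in arrival order (stable for ties).
   Waiting time of a process = its start time - its arrival time.
   Returns the average waiting time. */
double fcfs_avg_wait(const int arrival[], const int burst[], int n)
{
    int order[MAXP];
    for (int i = 0; i < n; i++) order[i] = i;
    /* stable insertion sort by arrival time */
    for (int i = 1; i < n; i++) {
        int k = order[i], j = i - 1;
        while (j >= 0 && arrival[order[j]] > arrival[k]) {
            order[j + 1] = order[j];
            j--;
        }
        order[j + 1] = k;
    }
    double total = 0;
    int t = 0;
    for (int i = 0; i < n; i++) {
        int p = order[i];
        if (t < arrival[p]) t = arrival[p];  /* CPU idle until arrival */
        total += t - arrival[p];             /* time spent in ready queue */
        t += burst[p];
    }
    return total / n;
}
```

Serving strictly by arrival time, the waits are 0, 6, 15, 18 and 28 ms, giving 67/5 = 13.4 ms on average.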

2) Priority Based non-preemptive algorithm

Process  Arrival Time  CPU Time  Priority
P1       0             6         2
P2       7             10        1
P3       4             4         3
P4       1             10        2
P5       2             12        0

Solution:

The Gantt chart is drawn as below:

Waiting time for P1=0=0 ms

Waiting time for P2=18-7=11 ms

Waiting time for P3= 38-4 =34 ms

Waiting time for P4= 28-1=27 ms

Waiting time for P5= 6-2 =4 ms

Average Waiting Time = (0+11+34+27+4)/5 =76/5 =15.2 ms

Average Turnaround Time = Av. Waiting Time + Av. Execution Time

= 15.2 + 42/5 = 15.2 + 8.4 = 23.6 ms

3) Shortest Job First (SJF):

The shortest job first algorithm associates with each process the
length of its next CPU burst, and uses these lengths to schedule the
process with the shortest time next.

SJF is optimal because it gives minimum average waiting time


for a given set of processes.

Maintain the Ready queue in order of increasing job lengths.


When a job comes in, insert it in the ready queue based on its
length. When current process is done, pick the one at the head of
the queue and run it.

This is provably optimal in terms of turnaround/response time. But
how do we find the length of a job? We make an estimate based on
past behavior. Say the estimated time (burst) for a process is E0,
and suppose the actual time is measured to be T0.

Update the estimate by taking a weighted sum of these two:

E1 = aT0 + (1-a)E0, and in general E(n+1) = aTn + (1-a)En
(exponential average)

If a = 0, recent history gets no weight;
if a = 1, past history gets no weight;
typically a = 1/2.

Expanding the recurrence:

E(n+1) = aTn + (1-a)aT(n-1) + ... + (1-a)^j aT(n-j) + ...

so older information has less weight.
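The exponential average is a one-line computation; the sketch below simply restates the recurrence in C:

```c
/* Exponential averaging of CPU burst lengths:
   E(n+1) = a * T(n) + (1 - a) * E(n).
   With a = 1/2, recent and past history get equal weight. */
double next_estimate(double a, double measured, double previous_estimate)
{
    return a * measured + (1.0 - a) * previous_estimate;
}
```

With a = 1/2, a previous estimate E0 = 10 and a measured burst T0 = 6, the next estimate is 0.5*6 + 0.5*10 = 8.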

Limitation of SJF:
The difficulty is knowing the length of the next CPU request

1) SJF non-preemptive: This is explained by taking the same set of
processes as in the FCFS example above:
The Gantt chart is drawn as below:

Waiting time for P1=0 ms

Waiting time for P2=10 ms

Waiting time for P3= (6-1) =5 ms

Waiting time for P4= (20-4) =16 ms

Waiting time for P5= (30-2) =28 ms

Average Waiting Time = (0+10+5+16+28)/5 =59/5 =11.8 ms

Average Turnaround Time = Av. Waiting Time + Av. Execution Time

= 11.8 + 42/5 = 11.8 + 8.4 = 20.2 ms

Comments: SJF is proven optimal only when all jobs are available
simultaneously.

Problem: SJF minimizes the average wait time because it services
small processes before it services large ones. While it minimizes
average wait time, it may penalize processes with high service time
requests. If the ready list is saturated, then processes with large
service times tend to be left in the ready list while small
processes receive service. In the extreme case, where the system has
little idle time, processes with large service times will never be
served. This total starvation of large processes may be a serious
liability of this algorithm.

Solution: Multi-Level Feedback Queues

Multi-Level Feedback Queue

Several queues are arranged in some priority order, and each queue
can have a different scheduling discipline and time quantum,
generally with lower quanta for higher priorities.

It is basically defined by:

• the number of queues;

• the scheduling algorithm for each queue;

• when to upgrade a priority;

• when to demote a priority.

This attacks both the efficiency and response time problems. It
gives a newly runnable process a high priority and a very short time
slice. If the process uses up the time slice without blocking, its
priority is decreased by 1 and its next time slice is doubled. It is
often implemented by having a separate queue for each priority.

How are priorities raised? By 1 if the process doesn't use its time
slice? What happens to a process that does a lot of computation when
it starts and then waits for user input? Its priority needs to be
boosted a lot, quickly.
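The demote-on-full-slice and boost-on-I/O rules above can be sketched as bookkeeping on a per-process record. The three-queue layout and quantum doubling below are illustrative parameters, not any specific OS's values:

```c
#define NQUEUES 3

/* Multilevel feedback queue bookkeeping for one process.
   Queue 0 is the highest priority and has the shortest quantum;
   each demotion doubles the next time slice. */
struct mlfq_state {
    int queue;    /* current queue index, 0 = highest priority */
    int quantum;  /* time slice at this level */
};

/* Called when the process's slice ends. If it used the whole slice
   without blocking, demote it one level and double its next slice. */
void on_slice_end(struct mlfq_state *s, int used_full_slice)
{
    if (used_full_slice && s->queue < NQUEUES - 1) {
        s->queue += 1;
        s->quantum *= 2;
    }
}

/* Boost a process back to the top queue when it blocks for user
   input ("boost priority a lot, quickly" for interactive work). */
void on_block_for_io(struct mlfq_state *s, int base_quantum)
{
    s->queue = 0;
    s->quantum = base_quantum;
}
```

The boost-to-top rule is a deliberately aggressive choice here: it answers the question in the text by letting a compute-then-interactive process regain interactive responsiveness in a single blocking call.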

PROS AND CONS OF


DIFFERENT SCHEDULING
ALGORITHM
Scheduling disciplines are algorithms used for distributing
resources among parties which simultaneously and asynchronously
request them. Scheduling disciplines are used in routers (to handle
packet traffic) as well as in operating systems (to share CPU time
among both threads and processes), disk drives (I/O scheduling),
printers (print spooler), most embedded systems, etc.

The main purposes of scheduling algorithms are to


minimize resource starvation and to ensure fairness amongst
the parties utilizing the resources. Scheduling deals with the
problem of deciding which of the outstanding requests is to be
allocated resources. There are many different scheduling
algorithms. In this section, we introduce several of them.

First in first out

Also known as First Come, First Served (FCFS), this is the simplest
scheduling algorithm. FIFO simply queues processes in the order that
they arrive in the ready queue.

• Since context switches only occur upon process


termination, and no reorganization of the process queue
is required, scheduling overhead is minimal.

• Throughput can be low, since long processes can hog the


CPU

• Turnaround time, waiting time and response time can be high for
the same reason.

• No prioritization occurs, thus this system has trouble


meeting process deadlines.

• The lack of prioritization does permit every process to


eventually complete, hence no starvation.

Shortest remaining time


Also known as Shortest Job First (SJF). With this strategy the
scheduler arranges processes with the least estimated processing
time remaining to be next in the queue. This requires advance
knowledge or estimations about the time required for a process to
complete.

• If a shorter process arrives during another process'


execution, the currently running process may be
interrupted, dividing that process into two separate
computing blocks. This creates excess overhead through
additional context switching. The scheduler must also place
each incoming process into a specific place in the queue,
creating additional overhead.

• This algorithm is designed for maximum throughput


in most scenarios.

• Waiting time and response time increase as the process's
computational requirements increase. Since turnaround time is based
on waiting time plus processing time, longer processes are
significantly affected by this. Overall waiting time is smaller than
FIFO, however, since no process has to wait for the termination of
the longest process.

• No particular attention is given to deadlines, the


programmer can only attempt to make processes with deadlines
as short as possible.

• Starvation is possible, especially in a busy system with


many small processes being run.

Fixed priority pre-emptive


scheduling
The O/S assigns a fixed priority rank to every process, and the
scheduler arranges the processes in the ready queue in order of
their priority. Lower priority processes get interrupted by incoming
higher priority processes.

• Overhead is neither minimal, nor is it significant.

• FPPS has no particular advantage in terms of throughput


over FIFO scheduling.

• Waiting time and response time depend on the priority of the
process. Higher priority processes have smaller waiting and response
times.

• Deadlines can be met by giving processes with deadlines a


higher priority.

• Starvation of lower priority processes is possible with large


amounts of high priority processes queuing for CPU time.

Round-robin scheduling
The scheduler assigns a fixed time unit per process, and cycles
through them.

• Round Robin scheduling involves extensive overhead,


especially with a small time unit.

• Balanced throughput between FCFS and SJF, shorter


jobs are completed faster than in FCFS and longer
processes are completed faster than in SJF.
• Good average response time; waiting time is dependent on the
number of processes, not the average process length.

• Because of high waiting times, deadlines are rarely met


in a pure Round Robin system.

• Starvation can never occur, since no priority is given. Order


of time unit allocation is based upon process arrival time, similar
to FCFS.

Multilevel queue scheduling


This is used for situations in which processes are easily divided into
different groups. For example, a common division is made between
foreground (interactive) processes and background (batch)
processes. These two types of processes have different response-
time requirements and so may have different scheduling needs.

Overview

Scheduling algorithm          CPU Utilization  Throughput  Turnaround time  Response time  Deadline handling  Starvation-free
First In First Out            Low              Low         High             Low            No                 Yes
Shortest Job First            Medium           High        Medium           Medium         No                 No
Priority based scheduling     Medium           Low         High             High           Yes                No
Round-robin scheduling        High             Medium      Medium           High           No                 Yes
Multilevel Queue scheduling   High             High        Medium           Medium         Low                Yes

HOW TO CHOOSE SCHEDULING


ALGORITHM
When designing an operating system, a programmer must consider
which scheduling algorithm will perform best for the use the system
is going to see. There is no universal “best” scheduling
algorithm, and many operating systems use extended versions or
combinations of the scheduling algorithms above. For example,
Windows NT/XP/Vista uses a multilevel feedback queue, a combination
of fixed priority preemptive scheduling, round-robin, and first in
first out. In this system, a process can dynamically increase or
decrease in priority depending on whether it has been serviced
already or has been waiting extensively. Every
priority level is represented by its own queue, with round-robin
scheduling amongst the high priority processes and FIFO among the
lower ones. In this sense, response time is short for most processes,
and short but critical system processes get completed very quickly.
Since processes can only use one time unit of the round robin in the
highest priority queue, starvation can be a problem for longer high
priority processes.
OPERATING SYSTEM
SCHEDULER IMPLEMENTATION

Windows
Very early MS-DOS and Microsoft Windows systems were non-
multitasking, and as such did not feature a scheduler. Windows
3.1x used a non-preemptive scheduler, meaning that it did not
interrupt programs. It relied on the program to end or tell the OS
that it didn't need the processor so that it could move on to another
process. This is usually called cooperative multitasking. Windows 95
introduced a rudimentary preemptive scheduler; however, for
legacy support opted to let 16 bit applications run without
preemption.

Mac OS
Mac OS 9 uses cooperative scheduling for threads, where one
process controls multiple cooperative threads, and also provides
preemptive scheduling for MP tasks. The kernel schedules MP
tasks using a preemptive scheduling algorithm. All Process
Manager Processes run within a special MP task, called the
"blue task". Those processes are scheduled cooperatively, using
a round-robin scheduling algorithm; a process yields control of the
processor to another process by explicitly calling a blocking function
such as WaitNextEvent. Each process has its own copy of the Thread
Manager that schedules that process's threads cooperatively; a
thread yields control of the processor to another thread by
calling YieldToAnyThread or YieldToThread.

Mac OS X uses a multilevel feedback queue, with four priority bands
for threads - normal, system high priority, kernel mode only, and
real-time. Threads are scheduled preemptively; Mac OS X also
supports cooperatively scheduled threads in its implementation of
the Thread Manager in Carbon.

Linux
From version 2.5 of the kernel to version 2.6, Linux used a multilevel
feedback queue with priority levels ranging from 0-140. 0-99 are
reserved for real-time tasks and 100-140 are considered nice task
levels. For real-time tasks, the time quantum for switching
processes is approximately 200 ms, and for nice tasks
approximately 10 ms. The scheduler will run through the queue of
all ready processes, letting the highest priority processes go first
and run through their time slices, after which they will be placed in
an expired queue. When the active queue is empty the expired
queue will become the active queue and vice versa. From versions
2.6 to 2.6.23, the kernel used an O(1) scheduler. In version 2.6.23,
they replaced this method with the Completely Fair Scheduler, which
uses red-black trees instead of queues.

FreeBSD
FreeBSD uses a multilevel feedback queue with priorities ranging
from 0-255. 0-63 are reserved for interrupts, 64-127 for the top half
of the kernel, 128-159 for real-time user threads, 160-223 for time-
shared user threads, and 224-255 for idle user threads. Also, like
Linux, it uses the active queue setup, but it also has an idle queue.

NetBSD
NetBSD uses a multilevel feedback queue with priorities ranging
from 0-223. 0-63 are reserved for time-shared threads (default,
SCHED_OTHER policy), 64-95 for user threads which entered kernel
space, 96-128 for kernel threads, 128-191 for user real-time threads
(SCHED_FIFO and SCHED_RR policies), and 192-223 for software
interrupts.

Solaris
Solaris uses a multilevel feedback queue with priorities ranging from
0-169. 0-59 are reserved for time-shared threads, 60-99 for system
threads, 100-159 for real-time threads, and 160-169 for low priority
interrupts. Unlike Linux, when a process is done using its time
quantum, it's given a new priority and put back in the queue.

SUMMARY:

Operating System                         Preemption  Algorithm
Windows 3.1x                             None        Cooperative scheduler
Windows 95, 98, Me                       Half        Preemptive for 32-bit processes, cooperative scheduler for 16-bit processes
Windows NT (2000, XP, Vista, 7, Server)  Yes         Multilevel feedback queue
Mac OS pre-9                             None        Cooperative scheduler
Mac OS 9                                 Some        Preemptive for MP tasks, cooperative scheduler for processes and threads
Mac OS X                                 Yes         Multilevel feedback queue
Linux pre-2.6                            Yes         Multilevel feedback queue
Linux 2.6-2.6.23                         Yes         O(1) scheduler
Linux post-2.6.23                        Yes         Completely Fair Scheduler
Solaris                                  Yes         Multilevel feedback queue
NetBSD                                   Yes         Multilevel feedback queue
FreeBSD                                  Yes         Multilevel feedback queue


COMPARISON IN OPERATING
SYSTEM SCHEDULERS

1. Solaris 2 Scheduling
• Priority-based process scheduling

• Classes: real time, system, time sharing, interactive

• Each class has different priority and scheduling algorithm

• Each LWP is assigned a scheduling class and priority

• Time-sharing/interactive: multilevel feedback queue

• Real-time processes run before a process in any other class

• System class is reserved for kernel use (paging, scheduler)

• The scheduling policy for the system class does not time-
slice

• The selected thread runs on the CPU until it blocks, uses up its
time slice, or is preempted by a higher-priority thread

• Multiple threads with the same priority → round-robin (RR)


Each class has its own set of priorities, but the scheduler converts
these class-specific priorities into global priorities.

2. Windows scheduling
Overview: Displays all context switches by CPU. (Figure: CPU
scheduling graph, zoomed to 500 microseconds.)

Graph Type: Event graph

Y-axis Units: CPUs


Required Flags: CSWITCH+DISPATCHER

Events Captured: Context switch events

Legend Description: Shows CPUs on the system.

Graph Description:
Shows all context switches for a time interval, aggregated
by CPU. A tooltip displays detailed information on the
context switch, including the call stack for the new thread.
Further information on the call stacks is available through
the summary tables, which are accessed by right-clicking the
graph and choosing Summary Table.

Note: Context switches can occur millions of times per
second. In order to display the discrete structure of the
context switch streams it is necessary to zoom to a short
time interval.

Interrupt CPU Usage


Overview: Displays CPU resources consumed by servicing
interrupts. (Figure: percentage of time spent servicing
interrupts.)

Graph Type: Usage graph

Y-axis Units: Percentage of CPU usage

Required Flags: INTERRUPT

Events Captured: Service interrupt events

Legend Description: Active CPUs on the system

Graph Description:
This graph displays the percentage of the total CPU
resource each processor spends servicing device interrupts.

Notable points:
• Priority-based preemptive scheduling

• A running thread runs until it is preempted by a higher-priority
thread, terminates, exhausts its time quantum, or calls a blocking
system call

• 32-level priority scheme

• Variable (1-15) and real-time (16-31) classes; priority 0 is
reserved for the memory-management thread
• A queue for each priority; the dispatcher traverses the set of
queues from highest to lowest until it finds a thread that is
ready to run

• Runs the idle thread when no thread is ready

• Each priority class has a base priority: the initial priority
for a thread belonging to that class

• The priority of variable-priority threads is adjusted dynamically

• Lowered (but not below the base priority) when the time
quantum runs out

• Priority boosts when it is released from a wait operation

• The boost level depends on the reason for wait

• Waiting for keyboard I/O gets a large priority increase

• Waiting for disk I/O gets a moderate priority increase

• Processes in the foreground window get a higher priority


3. Linux Scheduling
• Separate Time-sharing and real-time scheduling
algorithms.

• Allow only processes in user mode to be preempted.

• A process may not be preempted while it is running in
kernel mode, even if a real-time process with a higher priority
is available to run.

• Soft real-time system.

• Time-sharing: Prioritized, credit-based scheduling.

• The process with the most credits is selected.

• A timer interrupt occurs → the running process loses one credit.

• Zero credits → select another process.

• No runnable processes have credits → re-credit ALL processes.

• CREDITS = CREDITS * 0.5 + PRIORITY.

• Priority: real-time > interactive > background.

• Real-time scheduling.
• Two real-time scheduling classes: FCFS (non-preemptive)
and RR (preemptive).

• PLUS a priority for each process.

• Always runs the process with the highest priority.

• Equal priority → runs the process that has been waiting longest.

Symmetric multiprocessing in XP

Symmetric multiprocessing (SMP) is a technology that allows a
computer to use more than one processor. The most
common configuration of an SMP computer is one that uses
two processors. The two processors are used to complete
your computing tasks faster than a single processor. (Two
processors aren't necessarily twice as fast as a single
processor, though.)

In order for a computer to take advantage of a multiprocessor setup,
the software must be written for use with an SMP system. If a
program isn't written for SMP, it won't take advantage of SMP. Not
every program is written for SMP; SMP applications, such as image-
editing programs, video-editing suites, and databases, tend to be
processor intensive.
Operating systems also need to be written for SMP in order to use
multiple processors. In the Windows XP family, only XP Professional
supports SMP; XP Home does not. If you're a consumer with a dual-
processor PC at home, you have to buy XP Professional. Windows XP
Advanced Server also supports SMP.

In Microsoft's grand scheme, XP Professional is meant to replace
Windows 2000, which supports SMP. In fact, XP Professional uses the
same kernel as Windows 2000. XP Home is designed to replace
Windows Me as the consumer OS, and Windows Me does not support
SMP.

The difference between XP Professional and XP Home is more than
just $100 and SMP support. XP Professional has plenty of other
features not found in XP Home; some you'll use, others you won't
care about.


COMPARISON
1) Solaris 2 uses priority-based process scheduling.
2) Windows 2000 uses a priority-based preemptive scheduling
algorithm.
3) Linux provides two separate process-scheduling algorithms: one is
designed for time-sharing processes for fair preemptive scheduling
among multiple processes; the other designed for real-time tasks.
a) For processes in the time-sharing class Linux uses a prioritized
credit-based algorithm.
b) Real-time scheduling: Linux implements two real-time scheduling
classes namely FCFS (First come first serve) and RR (Round Robin).

DIAGRAMMATIC REPRESENTATION

(Figure: Solaris scheduling)
(Figure: Windows XP scheduling)
Linux Scheduling

• Constant order O(1) scheduling time

• Two priority ranges: time-sharing and real-time

• Real-time range from 0 to 99 and nice value from 100 to 140

(Figure: priorities and time-slice length)
(Figure: list of tasks indexed according to priority)
MEMORY MANAGEMENT

INTRODUCTION
Memory management is the act of managing computer
memory. In its simpler forms, this involves providing ways to
allocate portions of memory to programs at their request,
and freeing it for reuse when no longer needed. The
management of main memory is critical to the computer
system.

Virtual memory systems separate the memory addresses used by a
process from actual physical addresses, allowing separation of
processes and increasing the effectively available amount of
RAM using disk swapping. The quality of the virtual memory
manager can have a big impact on overall system performance.

Garbage collection is the automated deallocation of computer
memory resources for a program. It is generally implemented at the
programming-language level and stands in opposition to manual
memory management, the explicit allocation and deallocation of
computer memory resources. Region-based memory management is
an efficient variant of explicit memory management that can
deallocate large groups of objects simultaneously.
Requirements
Memory management systems on multi-tasking operating
systems usually deal with the following issues.

Relocation
In systems with virtual memory, programs in memory must be able
to reside in different parts of the memory at different times. This is
because when the program is swapped back into memory after
being swapped out for a while it cannot always be placed in the
same location. The virtual memory management unit must also deal
with concurrency. Memory management in the operating system
should therefore be able to relocate programs in memory and
handle memory references and addresses in the code of the
program so that they always point to the right location in memory.

Protection
Processes should not be able to reference the memory for another
process without permission. This is called memory protection, and
prevents malicious or malfunctioning code in one program from
interfering with the operation of other running programs.

Sharing
Even though the memory for different processes is normally
protected from each other, different processes sometimes need to
be able to share information and therefore access the same part of
memory. Shared memory is one of the fastest techniques for Inter-
process communication.

Logical organization
Programs are often organized in modules. Some of these modules
could be shared between different programs, some are read only
and some contain data that can be modified. The memory
management is responsible for handling this logical organization
that is different from the physical linear address space. One way to
arrange this organization is segmentation.

Physical Organization
Memory is usually divided into fast primary storage and
slow secondary storage. Memory management in the operating
system handles moving information between these two levels of
memory.

DOS memory managers

In addition to standard memory management, the 640 KB barrier
of MS-DOS and compatible systems led to the development of
programs known as memory managers when PC main memories
started to be routinely larger than 640 KB in the late 1980s
(see conventional memory). These move portions of the operating
system outside their normal locations in order to increase the
amount of conventional or quasi-conventional memory available to
other applications. Examples are EMM386, which was part of the
standard installation in DOS's later versions, and QEMM. These
allowed use of memory above the 640 KB barrier: upper memory,
normally reserved for ROMs and device memory, and the high
memory area.

Mac Memory Managers

In any program you write, you must ensure that you manage
resources effectively and efficiently. One such resource is your
program’s memory. In an Objective-C program, you must make sure
that objects you create are disposed of when you no longer need
them.

In a complex system, it could be difficult to determine exactly
when you no longer need an object. Cocoa defines some rules
and principles that help make that determination easier.

Important: In Mac OS X v10.5 and later, you can use automatic
memory management by adopting garbage collection. This is
described in Garbage Collection Programming Guide. Garbage
collection is not available on iOS.

• “Memory Management Rules” summarizes the rules for object
ownership and disposal.

• “Object Ownership and Disposal” describes the primary
object-ownership policy.

• “Practical Memory Management” gives a practical
perspective on memory management.

• “Autorelease Pools” describes the use of autorelease pools, a
mechanism for deferred deallocation, in Cocoa programs.

• “Accessor Methods” describes how to implement accessor
methods.

• “Implementing Object Copy” discusses issues related to object
copying, such as deciding whether to implement a deep or
shallow copy and approaches for implementing object copy in
your subclasses.

• “Memory Management of Core Foundation Objects in Cocoa”
gives guidelines and techniques for memory management of
Core Foundation objects in Cocoa code.

• “Memory Management of Nib Objects” discusses memory
management issues related to nib files.

Memory Management in
WINDOWS
This is one of three related technical articles—"Managing
Virtual Memory," "Managing Memory-Mapped Files," and
"Managing Heap Memory"—that explain how to manage memory
in applications for Windows.

In each article, this introduction identifies the basic memory
components in the Windows programming model and
indicates which article to reference for specific areas of
interest.

The first version of the Microsoft Windows operating system
introduced a method of managing dynamic memory based
on a single global heap, which all applications and the system
on a single global heap, which all applications and the system
share, and multiple, private local heaps, one for each application.
Local and global memory management functions were also
provided, offering extended features for this new memory
management system.

More recently, the Microsoft C run-time (CRT) libraries were
modified to include capabilities for managing these heaps in
Windows using native CRT functions such as malloc and free.
Consequently, developers are now left with a choice—learn the new
application programming interface (API) provided as part of
Windows or stick to the portable, and typically familiar, CRT
functions for managing memory in applications written for Windows.

The Windows API offers three groups of functions for
managing memory in applications: memory-mapped file
functions, heap memory functions, and virtual memory
functions.
Figure 1. The Windows API provides different levels of
memory management for versatility in application
programming.

Table 1. Memory Management Functions

Memory set                     System resource affected              Related technical article
Virtual memory functions       A process's virtual address space,    "Managing Virtual Memory"
                               system pagefile, system memory,
                               hard disk space
Memory-mapped file functions   A process's virtual address space,    "Managing Memory-Mapped Files"
                               system pagefile, standard file I/O,
                               system memory, hard disk space
Heap memory functions          A process's virtual address space,    "Managing Heap Memory"
                               system memory, process heap
                               resource structure
Global heap memory functions   A process's heap resource structure   "Managing Heap Memory"
Local heap memory functions    A process's heap resource structure   "Managing Heap Memory"
C run-time reference library   A process's heap resource structure   "Managing Heap Memory"
Memory Management in Linux
Rather than describing the theory of memory management
in operating systems, this section tries to pinpoint the main
features of the Linux implementation. Although you do not
need to be a Linux virtual memory guru to implement mmap, a
basic overview of how things work is useful. What follows is a fairly
lengthy description of the data structures used by the kernel to
manage memory. Once the necessary background has been
covered, we can get into working with these structures.

Address Types
Linux is, of course, a virtual memory system, meaning that
the addresses seen by user programs do not directly
correspond to the physical addresses used by the hardware.
Virtual memory introduces a layer of indirection that allows a
number of nice things. With virtual memory, programs running
on the system can allocate far more memory than is physically
available; indeed, even a single process can have a virtual address
space larger than the system's physical memory. Virtual memory
also allows the program to play a number of tricks with the
process's address space, including mapping the program's
memory to device memory.
Thus far, we have talked about virtual and physical addresses,
but a number of the details have been glossed over. The Linux
system deals with several types of addresses, each with its own
semantics. Unfortunately, the kernel code is not always very clear
on exactly which type of address is being used in each situation, so
the programmer must be careful.

The following is a list of address types used in Linux.
Figure 15-1 shows how these address types relate to physical
memory.

User virtual addresses


These are the regular addresses seen by user-space programs. User
addresses are either 32 or 64 bits in length, depending on the
underlying hardware architecture, and each process has its own
virtual address space.

Physical addresses

These are the addresses used between the processor and the
system's memory. Physical addresses are 32- or 64-bit quantities;
even 32-bit systems can use larger physical addresses in some
situations.

Bus addresses
The addresses used between peripheral buses and memory. Often, they are the
same as the physical addresses used by the processor, but that is not
necessarily the case. Some architectures can provide an I/O memory
management unit (IOMMU) that remaps addresses between a bus and main
memory.

Kernel logical addresses


These make up the normal address space of the kernel.
These addresses map some portion (perhaps all) of main
memory and are often treated as if they were physical
addresses. On most architectures, logical addresses and
their associated physical addresses differ only by a constant
offset.

Logical addresses use the hardware's native pointer size
and, therefore, may be unable to address all of physical
memory on heavily equipped 32-bit systems. Logical addresses
are usually stored in variables of type unsigned long or void *.
Memory returned from kmalloc has a kernel logical address.

Kernel virtual addresses


Kernel virtual addresses are similar to logical addresses in that they
are a mapping from a kernel-space address to a physical address.
Kernel virtual addresses do not necessarily have the linear, one-to-
one mapping to physical addresses that characterize the logical
address space, however. All logical addresses are kernel virtual
addresses, but many kernel virtual addresses are not logical
addresses.

For example, memory allocated by vmalloc has a virtual address
(but no direct physical mapping). The kmap function (described later
in this chapter) also returns virtual addresses. Virtual addresses are
usually stored in pointer variables.

DIAGRAM: Address types used in Linux


If you have a logical address, the macro __pa( ) (defined
in <asm/page.h>) returns its associated physical address. Physical
addresses can be mapped back to logical addresses with __va( ),
but only for low-memory pages.

Different kernel functions require different types of addresses. It


would be nice if there were different C types defined, so that the
required address types were explicit, but we have no such luck. In
this chapter, we try to be clear on which types of addresses are
used where.

Physical Addresses and Pages


Physical memory is divided into discrete units called pages. Much of
the system's internal handling of memory is done on a per-page
basis. Page size varies from one architecture to the next, although
most systems currently use 4096-byte pages. The
constant PAGE_SIZE (defined in <asm/page.h>) gives the page size
on any given architecture.
If you look at a memory address, virtual or physical, it is
divisible into a page number and an offset within the page. If
4096-byte pages are being used, for example, the 12 least-
significant bits are the offset, and the remaining, higher bits
indicate the page number. If you discard the offset and shift the
rest of the address to the right, the result is called a page frame
number (PFN). Shifting bits to convert between page frame
numbers and addresses is a fairly common operation; the macro
PAGE_SHIFT tells how many bits must be shifted to make this
conversion.

Virtual Memory Areas


The virtual memory area (VMA) is the kernel data structure used to
manage distinct regions of a process's address space. A VMA
represents a homogeneous region in the virtual memory of a
process: a contiguous range of virtual addresses that have the same
permission flags and are backed up by the same object (a file, say,
or swap space). It corresponds loosely to the concept of a
"segment," although it is better described as "a memory object with
its own properties." The memory map of a process is made up of (at
least) the following areas:

• An area for the program's executable code (often called text)

• Multiple areas for data, including initialized data (that which
has an explicitly assigned value at the beginning of execution),
uninitialized data (BSS),[3] and the program stack
• One area for each active memory mapping

The memory areas of a process can be seen by looking
in /proc/pid/maps (in which pid, of course, is replaced by a
process ID). /proc/self is a special case of /proc/pid, because it
always refers to the current process. As an example, here are a
couple of memory maps (to which we have added short comments
in italics):

# cat /proc/1/maps look at init


08048000-0804e000 r-xp 00000000 03:01 64652 /sbin/init text
0804e000-0804f000 rw-p 00006000 03:01 64652 /sbin/init data
0804f000-08053000 rwxp 00000000 00:00 0 zero-mapped BSS
40000000-40015000 r-xp 00000000 03:01 96278 /lib/ld-2.3.2.so
text
40015000-40016000 rw-p 00014000 03:01 96278 /lib/ld-2.3.2.so
data
40016000-40017000 rw-p 00000000 00:00 0 BSS for ld.so
42000000-4212e000 r-xp 00000000 03:01 80290 /lib/tls/libc-
2.3.2.so text
4212e000-42131000 rw-p 0012e000 03:01 80290 /lib/tls/libc-
2.3.2.so data
42131000-42133000 rw-p 00000000 00:00 0 BSS for libc
bffff000-c0000000 rwxp 00000000 00:00 0 Stack segment
ffffe000-fffff000 ---p 00000000 00:00 0 vsyscall page

# rsh wolf cat /proc/self/maps #### x86-64 (trimmed)


00400000-00405000 r-xp 00000000 03:01 1596291 /bin/cat
text
00504000-00505000 rw-p 00004000 03:01 1596291 /bin/cat
data
00505000-00526000 rwxp 00505000 00:00 0 bss
3252200000-3252214000 r-xp 00000000 03:01 1237890 /lib64/ld-
2.3.3.so
3252300000-3252301000 r--p 00100000 03:01 1237890 /lib64/ld-
2.3.3.so
3252301000-3252302000 rw-p 00101000 03:01 1237890 /lib64/ld-
2.3.3.so
7fbfffe000-7fc0000000 rw-p 7fbfffe000 00:00 0 stack
ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0 vsyscall

Mac OS memory
management

(Figure: the "About This Computer" window in Mac OS 9.1, showing
the memory consumption of each open application and of the system
software itself.)

Historically, the Mac OS used a form of memory management that
has fallen out of favour in modern systems. Criticism of this
approach was one of the key areas addressed by the change to Mac
OS X.

The original problem for the designers of the Macintosh was how to
make optimum use of the 128 KB of RAM that the machine was
equipped with.[1] Since at that time the machine could only run
one application program at a time, and there was
no fixed secondary storage, the designers implemented a simple
scheme which worked well with those particular constraints.
However, that design choice did not scale well with the development
of the machine, creating various difficulties for both programmers
and users.

FRAGMENTATION
The chief worry of the original designers appears to have
been fragmentation - that is, repeated allocation and
deallocation of memory through pointers leads to many
small isolated areas of memory which cannot be used
because they are too small, even though the total free
memory may be sufficient to satisfy a particular request for
memory.

To solve this, Apple's designers used the concept of a
relocatable handle: a reference to memory which allowed
the actual data it referred to to be moved without invalidating
the handle.

Apple's scheme was simple: a handle was simply a pointer into a
(non-relocatable) table of further pointers, which in turn pointed to
the data. If a memory request required compaction of memory, this
was done and the table, called the master pointer block, was
updated. The machine implemented two areas for this
scheme: the system heap (used for the OS), and
the application heap. As long as only one application at a time was
run, the system worked well. Since the entire application heap was
dissolved when the application quit, fragmentation was minimized.
SWITCHER
The situation worsened with the advent of Switcher, which
was a way for the Mac to run multiple applications at once.
This was a necessary step forward for users, who found the
one-app-at-a-time approach very limiting. However, because
Apple was now committed to its memory management model, as
well as compatibility with existing applications, it was forced to
adopt a scheme where each application was allocated its own heap
from the available RAM. The amount of actual RAM allocated to each
heap was set by a value coded into each application, set by the
programmer. Invariably this value wasn't enough for particular kinds
of work, so the value setting had to be exposed to the user to allow
them to tweak the heap size to suit their own requirements. This
exposure of a technical implementation detail was very
much against the grain of the Mac user philosophy. Apart
from exposing users to esoteric technicalities, it was inefficient,
since an application would grab (unwillingly) all of its allotted RAM,
even if it left most of it subsequently unused. Another application
might be memory starved, but was unable to utilise the free
memory "owned" by another application.

How is Virtual Memory Handled in Mac OS X?
Memory, or RAM, is handled differently in Mac OS X than it
was in earlier versions of the Mac OS. In earlier versions of
the Mac OS, each program had assigned to it an amount of
RAM the program could use. Users could turn on Virtual
Memory, which uses part of the system's hard drive as extra
RAM, if the system needed it.

In contrast, Mac OS X uses a completely different memory
management system. All programs can use an almost unlimited
amount of memory, which is allocated to the application on an as-
needed basis. Mac OS X will generously load as much of a
program into RAM as it can, even parts that may not
currently be in use.

This may inflate the amount of actual RAM being used by the
system. When RAM is needed, the system will swap or page out
those pieces not needed or not currently in use. It is
important to bear this in mind because a casual examination of
memory usage with the top command via the Terminal application
will reveal large amounts of RAM being used by applications. (The
Terminal application allows users to access the UNIX operating
system which is the foundation of Mac OS X.) When needed, the
system will dynamically allocate additional virtual memory so there
is no need for users try to tamper with how the system handles
additional memory needs. However, there is no substitute for having
additional physical RAM.

Most Macintoshes produced in the past few years have
shipped with either 128 or 256 MB of RAM. Although Apple
claims that the minimum amount of RAM that's needed to run Mac
OS X is 128 MB, users will find having at least 256 MB is necessary
to work in a productive way and having 512 MB is preferable.

Starting with Mac OS 10.4 (Tiger) the minimum will be raised
to 256 MB of RAM. Most new Macintoshes are shipping with
512 MB of RAM. For systems which have only 256 MB of RAM it is
advisable for users to have at least 512 MB of RAM in order to run
applications effectively.

Mac OS 10.5 (Leopard) requires at least 512 MB of RAM. Most
users will find that a minimum of 1 GB of RAM is desirable. Less
than 1 GB means the system will have to make use of
virtual memory, which will adversely affect system
performance.

C Program for CPU Scheduling and Memory Management


#include<stdio.h>

#include<conio.h>

void roundrobin();
void fifo();

void prioritynonpre();

void sjf();

void fcfs();

void lru();

int main()
{
    int choice1, choice2, choice3, choice4, choice5;

    while(1)
    {
        //clrscr();
        printf("\n\n\t ***** Welcome To CPU SCHEDULING and MEMORY MANAGEMENT ALGO *****");
        printf("\n\n Enter your choice : \n 1.For CPU Scheduling algorithms\n 2.For Memory Management algorithms\n 0.For EXIT \n Enter Your Choice: ");
        scanf("%d",&choice1);

        if(choice1 == 1)
        {
            //clrscr();
            printf("\n\n Enter your choice:\n 1.For Pre-emptive\n 2.For Non-Preemptive\n 0.For To Exit \n Enter Your Choice:");
            scanf("%d",&choice2);

            if(choice2 == 1)
            {
                //clrscr();
                printf("Enter your choice :\n 1.For Round Robin\n 0.For To Exit\n Enter Your Choice:");
                scanf("%d",&choice3);

                if(choice3 == 1)
                    roundrobin();
                else if(choice3 == 0)
                    break;
                else
                {
                    printf("\n\n\t ***** INVALID INPUT *****");
                    printf("\n\t Press any key to continue......");
                    getch();
                }
            }
            else if(choice2 == 2)
            {
                //clrscr();
                printf("\n\n Enter your choice:\n 1.For FCFS\n 3.For SJF\n 0.For to Exit \n Enter Your Choice:");
                scanf("%d",&choice4);

                if(choice4 == 1)
                    fifo();
                else if(choice4 == 3)
                    sjf();
                else if(choice4 == 0)
                    break;
                else
                {
                    printf("\n\n\t ***** INVALID INPUT *****");
                    printf("\n\t Press any key to continue......");
                    getch();
                }
            }
            else if(choice2 == 0)
                break;
            else
            {
                printf("\n\n\t ***** INVALID INPUT *****");
                printf("\n\t Press any key to continue......");
                getch();
            }
        }
        else if(choice1 == 2)
        {
            //clrscr();
            printf("Enter your choice:\n 1.For FIFO ALGORITHM\n 2.For LRU ALGORITHM\n 0.For To Exit\n Enter Your Choice:");
            scanf("%d",&choice5);

            if(choice5 == 1)
                fcfs();
            else if(choice5 == 2)
                lru();
            else if(choice5 == 0)
                break;
            else
            {
                printf("\n\n\t ***** INVALID INPUT *****");
                printf("\n\t Press any key to continue......");
                getch();
            }
        }
        else if(choice1 == 0)
            break;
        else
        {
            printf("\n\n\t ***** INVALID INPUT *****");
            printf("\n\t Press any key to continue......");
            getch();
        }
        getch();
    }
    return 0;
}

void sjf()
{
    int burst[5], arrival[5], done[5], waiting[5];
    int i, j, l = 0, sum, min;
    int temp;
    float awt = 0.0;

    sum = 0;
    printf("\n\n\t\t ***** This SJF is for 5 Processes *****");
    printf("\n\n\tEnter the details of the processes ");
    for(i=0;i<5;i++)
    {
        printf("\n\n\tProcess %d\n",i+1);
        printf("\nEnter the burst time : ");
        scanf("%d",&burst[i]);
        printf("Enter the arrival time : ");
        scanf("%d",&arrival[i]);
        done[i] = 0;
        waiting[i] = 0;
    }

    for(i=0;i<5;i++)
        sum = sum + burst[i];

    /* Step through time; at each point pick the shortest
       unfinished job that has already arrived. */
    for(i=0;i<sum;)
    {
        min = sum;
        for(j=0;j<5;j++)
        {
            if(burst[j] < min && done[j] == 0 && arrival[j] <= i)
            {
                min = burst[j];
                l = j;
            }
        }
        printf("\nProcessing process %d",l+1);
        printf(" i = %d",i);
        printf(" arrival = %d",arrival[l]);
        temp = i - arrival[l];
        i = i + burst[l];
        done[l] = 1;
        waiting[l] = temp;
    }

    awt = 0.0F;
    printf("\nThe respective waiting times are : ");
    for(i=0;i<5;i++)
    {
        printf("\np%d = %d",i+1,waiting[i]);
        awt = awt + waiting[i];
    }
    awt = awt/5;
    printf("\n\nAWT = %f",awt);

    printf("\n\n Press any key to continue.....");
    getch();
}

void roundrobin()
{
    int burst[4], arrival[4], lefttime[4], waiting[4], last[4];
    int i, j, sum, slice, skipped;
    int gap = 0;
    float awt = 0;

    sum = 0;
    printf("\n\n *** This ROUND ROBIN algorithm works for 4 processes ***");
    printf("\n\n\t Enter the details of the processes:");
    for (i = 0; i < 4; i++)
    {
        printf("\n\n\t Process %d\n", i + 1);
        printf("\n Enter the burst time:");
        scanf("%d", &burst[i]);
        printf("\n Enter the arrival time:");
        scanf("%d", &arrival[i]);
        lefttime[i] = burst[i];   /* remaining CPU time */
        waiting[i] = 0;
        last[i] = arrival[i];     /* last moment the process left the ready queue */
    }
    printf("\n Enter the interval time:");
    scanf("%d", &gap);
    for (i = 0; i < 4; i++)
        sum = sum + burst[i];
    /* Cycle through the processes, giving each at most one quantum (gap)
       per turn; `i` is the current time, `sum` the total work left. */
    i = 0;
    j = 0;
    skipped = 0;
    while (sum > 0)
    {
        if (lefttime[j] > 0 && arrival[j] <= i)
        {
            slice = (lefttime[j] < gap) ? lefttime[j] : gap;
            waiting[j] = waiting[j] + (i - last[j]);
            lefttime[j] = lefttime[j] - slice;
            sum = sum - slice;
            i = i + slice;
            last[j] = i;          /* this slice ends now */
            skipped = 0;
        }
        else if (++skipped == 4)  /* no process is ready: idle one tick */
        {
            i++;
            skipped = 0;
        }
        j = (j + 1) % 4;          /* next process in the ring */
    }
    for (i = 0; i < 4; i++)
    {
        printf("\n\n Waiting time for p%d = %d", i + 1, waiting[i]);
        awt = awt + waiting[i];
    }
    awt = awt / 4;
    printf("\n\n The average waiting time is %.3f", awt);
    printf("\n\n Press any key to continue.....");
    getch();
}

void lru()
{
    int num, i, buf, page[100], buff[100], j, pagefault, rep = 0, ind = 0, abc[100];
    int count;
    int l, k, fla, resident;

    printf("\n\n Enter the number of paging sequence you want to enter:");
    scanf("%d", &num);
    printf("\n Enter the paging sequence:\n");
    for (i = 0; i < num; i++)
    {
        printf("\n %d. ", i + 1);
        scanf("%d", &page[i]);
    }
    printf("\n\nEnter the buffer size : ");
    scanf("%d", &buf);
    for (j = 0; j < buf; j++)
        buff[j] = 0;          /* 0 marks an empty frame */
    pagefault = 0;
    k = 0;
    for (i = 0; i < num; i++)
    {
        /* Hit? Then the page is already resident. */
        fla = 0;
        for (j = 0; j < buf; j++)
        {
            if (buff[j] == page[i])
            {
                fla = 1;
                break;
            }
        }
        if (fla == 1)
            continue;
        if (k < buf)          /* a free frame is still available */
        {
            buff[k] = page[i];
            k++;
            pagefault++;
            printf("\nNow pages are : ");
            for (l = 0; l < buf; l++)
                if (buff[l] != 0)
                    printf(" %d", buff[l]);
            continue;
        }
        /* Buffer full: scan the reference string backwards, collecting the
           resident pages in order of most recent use; the last one collected
           is the least recently used and becomes the victim. */
        count = 0;
        for (j = 0; j < buf; j++)
            abc[j] = 0;
        for (l = i - 1; l >= 0 && count < buf; l--)
        {
            resident = 0;
            for (j = 0; j < buf; j++)
                if (buff[j] == page[l])
                {
                    resident = 1;
                    break;
                }
            if (resident == 0)
                continue;
            fla = 0;
            for (j = 0; j < count; j++)
                if (abc[j] == page[l])
                {
                    fla = 1;
                    break;
                }
            if (fla == 0)
                abc[count++] = page[l];
        }
        rep = abc[buf - 1];   /* least recently used resident page */
        for (l = 0; l < buf; l++)
        {
            if (rep == buff[l])
            {
                ind = l;
                break;
            }
        }
        printf("\nReplacement = %d", rep);
        printf("\nindex = %d", ind);
        buff[ind] = page[i];
        pagefault++;
        printf("\nNow pages are : ");
        for (l = 0; l < buf; l++)
            printf(" %d", buff[l]);
    }
    printf("\n\nPage faults = %d", pagefault);
    printf("\n\nPress any key to continue.....");
    getch();
}

void fifo()
{
    int k = 0, ptime[25], n, s = 0, i, sum = 0;
    char name[25][25];
    float avg;

    printf("\n\nEnter the no. of processes:\t");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        printf("\n Enter the name for process %d:\t", i + 1);
        scanf("%s", name[i]);
    }
    printf("\n \n");
    for (i = 0; i < n; i++)
    {
        printf("\n Enter the process time for %s:\t", name[i]);
        scanf("%d", &ptime[i]);
    }
    printf("\n \n");
    printf("\n Process - Name \t Process - Time \n");
    for (i = 0; i < n; i++)
        printf("\t %s \t \t %d \n", name[i], ptime[i]);
    printf("\n \n ***** FIFO SCHEDULING IMPLEMENTATION ***** \n \n");
    for (i = 0; i < n; i++)
    {
        printf("\n process %s from %d to %d \n", name[i], k, (k + ptime[i]));
        k += ptime[i];
    }
    /* Waiting time of process i is the sum of the burst times before it. */
    for (i = 0; i < (n - 1); i++)
    {
        s += ptime[i];
        sum += s;
    }
    avg = (float)sum / n;
    printf("\n\n Average waiting time:\t");
    printf("%.2f msec", avg);
    /* Turnaround time of process i is its waiting time plus its own burst. */
    sum = s = 0;
    for (i = 0; i < n; i++)
    {
        s += ptime[i];
        sum += s;
    }
    avg = (float)sum / n;
    printf("\n Turn around time is:\t");
    printf("%.2f msec", avg);
    printf("\n\n Press any key to continue.....");
    getch();
}

void fcfs()
{
    int num, i, buf, page[100], buff[100], j, pagefault, flag = 1, k, l;

    printf("\n Enter the number of paging sequence you want to enter:");
    scanf("%d", &num);
    printf("\n Enter the paging sequence:\n");
    for (i = 0; i < num; i++)
    {
        printf("\n %d. ", i + 1);
        scanf("%d", &page[i]);
    }
    printf("\n\n Enter the buffer size:");
    scanf("%d", &buf);
    for (j = 0; j < buf; j++)
        buff[j] = 0;          /* 0 marks an empty frame */
    pagefault = 0;
    k = 0;
    for (i = 0; i < num; i++)
    {
        /* Hit? Then the page is already resident. */
        flag = 1;
        for (j = 0; j < buf; j++)
        {
            if (buff[j] == page[i])
            {
                flag = 0;
                break;
            }
        }
        if (flag == 0)
            continue;
        if (k < buf)          /* a free frame is still available */
        {
            buff[k] = page[i];
            k++;
            pagefault++;
            printf("\n Now pages are : ");
            for (l = 0; l < buf; l++)
                if (buff[l] != 0)
                    printf(" %d", buff[l]);
            continue;
        }
        /* Buffer full: evict the oldest page (front of the FIFO queue)
           by shifting left and appending the new page at the back. */
        for (j = 0; j < buf - 1; j++)
            buff[j] = buff[j + 1];
        buff[buf - 1] = page[i];
        pagefault++;
        printf("\n Now pages are : ");
        for (l = 0; l < buf; l++)
            printf(" %d", buff[l]);
    }
    printf("\n\n\t\t Page faults = %d", pagefault);
    printf("\n\t\t Page replacements are = %d", pagefault - buf);
    printf("\n\t\t Press any key to continue.....");
    getch();
}
