Unit 2: Process, Processor and Memory Management
UNIT 2: PROCESS, PROCESSOR AND MEMORY MANAGEMENT
• A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
• There may exist more than one process in the system, and several processes may require the same resource at the same time. Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.
• Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and a deadlock may occur.
• The terms job and process are used almost interchangeably, with process being the preferred term.
• The operating system is responsible for the following activities in connection with Process Management:
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.
• A process is a program in execution. A process is more than the program code, which is sometimes known as the text section.
• It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
• A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables.
• A process may also include a heap, which is memory that is dynamically allocated during process run time.
PROCESS STATE / PROCESS CONTROL BLOCK (PCB)
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.
• Processes migrate among the various queues. The selection process is carried out by the appropriate scheduler.
1. Long term scheduler - also known as the job scheduler. It chooses processes from the pool (secondary memory) and keeps them in the ready queue maintained in primary memory. The long term scheduler mainly controls the degree of multiprogramming; its purpose is to choose a balanced mix of I/O-bound and CPU-bound processes from the jobs present in the pool.
2. Short term scheduler - also known as the CPU scheduler. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution. A scheduling algorithm is used to select which job is dispatched. The job of the short term scheduler can be critical: if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time.
3. Medium term scheduler - takes care of swapped-out processes. If a process in the running state needs some I/O time to complete, its state must be changed from running to waiting. The medium term scheduler is used for this purpose: it removes the process from the running state to make room for other processes. Such processes are the swapped-out processes, and this procedure is called swapping. The medium term scheduler is responsible for suspending and resuming processes.
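The division of labour between the three schedulers can be sketched with simple queues. This is a toy model, not a real scheduler; the process labels, the cap of 2 resident processes, and the swap step are invented for illustration:

```python
from collections import deque

job_pool = deque(["P1", "P2", "P3", "P4"])  # jobs waiting in secondary memory
ready_queue = deque()                       # processes resident in primary memory

# Long term scheduler: admits jobs from the pool into the ready queue,
# controlling the degree of multiprogramming (capped at 2 here).
while job_pool and len(ready_queue) < 2:
    ready_queue.append(job_pool.popleft())

# Short term scheduler: picks the next ready process for the CPU.
running = ready_queue.popleft()

# Medium term scheduler: the running process requests I/O, so it is
# swapped out, making room to admit another job from the pool.
swapped_out = running
ready_queue.append(job_pool.popleft())

print(swapped_out, list(ready_queue))  # P1 ['P2', 'P3']
```

The point of the sketch is only the separation of roles: admission (long term), dispatch (short term), and swapping (medium term) act on different queues.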
VARIOUS TIMES RELATED TO THE PROCESS
1. Arrival Time: The time at which the process enters the ready queue is called the arrival time.
2. Burst Time: The total amount of CPU time required to execute the whole process is called the burst time. It does not include waiting time. Note that it is difficult to know the execution time of a process before actually executing it, so scheduling based purely on burst time cannot be implemented exactly in practice.
3. Completion Time: The time at which the process enters the completed state, i.e., finishes its execution, is called the completion time.
4. Turnaround Time: The total amount of time spent by the process from its arrival to its completion is called the turnaround time.
5. Waiting Time: The total amount of time for which the process waits for the CPU to be assigned is called the waiting time.
6. Response Time: The difference between the arrival time and the time at which the process first gets the CPU is called the response time.
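The relationships among these times can be made concrete with a small worked example (the numbers are invented for illustration):

```python
# For a single process:
#   turnaround time = completion time - arrival time
#   waiting time    = turnaround time - burst time
#   response time   = time of first CPU allocation - arrival time

arrival = 2      # enters the ready queue at t = 2
burst = 5        # needs 5 time units of CPU in total
first_cpu = 4    # first gets the CPU at t = 4
completion = 11  # finishes at t = 11

turnaround = completion - arrival   # 9
waiting = turnaround - burst        # 4 (time spent waiting in the ready queue)
response = first_cpu - arrival      # 2

print(turnaround, waiting, response)  # 9 4 2
```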
CPU SCHEDULING
• In uniprogramming systems like MS-DOS, when a process waits for an I/O operation to complete, the CPU remains idle.
• In multiprogramming systems, the operating system schedules processes on the CPU so as to maximize its utilization; this procedure is called CPU scheduling. The operating system uses various scheduling algorithms to schedule the processes.
• It is the task of the short term scheduler to schedule the CPU among the processes present in the job pool. Whenever the running process requests an I/O operation, the short term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting.
SCHEDULING ALGORITHMS
• There are various algorithms used by the operating system to schedule processes on the processor in an efficient way.
• The purpose of a scheduling algorithm:
• Maximum CPU utilization
• Fair allocation of the CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time
• First-Come, First-Served (FCFS) - the simplest algorithm to implement. The process with the earliest arrival time gets the CPU first: the lower the arrival time, the sooner the process gets the CPU. It is a non-preemptive scheduling algorithm.
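A minimal FCFS simulation ties the algorithm to the times defined above. This is a sketch; the process list and its arrival/burst values are invented:

```python
# FCFS: sort by arrival time, run each job to completion (non-preemptive).
def fcfs(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (waiting time, turnaround time)}."""
    time, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)       # CPU may sit idle until the job arrives
        waiting = time - arrival        # time spent in the ready queue
        time += burst                   # run the job to completion
        results[name] = (waiting, time - arrival)
    return results

print(fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]))
# → {'P1': (0, 4), 'P2': (3, 6), 'P3': (5, 6)}
```

Note how P3, a very short job, still waits behind the longer jobs that arrived earlier; this is the weakness FCFS has when a long burst arrives first.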
• SJF (Shortest-Job-First) is optimal – it gives the minimum average waiting time for a given set of processes.
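Non-preemptive SJF can be sketched in the same style. This is illustrative only, and it assumes burst times are known in advance, which, as noted earlier, they are not in practice:

```python
# SJF (non-preemptive): among the jobs that have already arrived,
# always run the one with the shortest burst time next.
def sjf(processes):
    """processes: list of (name, arrival, burst). Returns average waiting time."""
    pending = sorted(processes, key=lambda p: p[1])  # sorted by arrival
    time, total_wait = 0, 0
    while pending:
        # Jobs that have arrived; if none, idle until the earliest one arrives.
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        name, arrival, burst = min(ready, key=lambda p: p[2])
        time = max(time, arrival)
        total_wait += time - arrival
        time += burst
        pending.remove((name, arrival, burst))
    return total_wait / len(processes)

jobs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]
print(sjf(jobs))  # → 4.0 (FCFS gives 5.0 on the same jobs)
```

Letting the short job P3 jump ahead of P2 is exactly what lowers the average waiting time here.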
• Paging - Paging is a memory management technique in which secondary memory is divided into fixed-size blocks called pages, and main memory is divided into fixed-size blocks called frames. A frame has the same size as a page. Processes initially reside in secondary memory, from where they are shifted to main memory (RAM) when there is a requirement.
• Compaction - Compaction is a memory management technique in which the free space of a running system is compacted, to reduce the fragmentation problem and improve memory allocation efficiency. Compaction is used by many modern operating systems, such as Windows, Linux, and Mac OS X.
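The effect of fixed-size pages and frames can be illustrated with a simple address-translation sketch. The page size and the page-table contents below are assumed values, not from the text:

```python
PAGE_SIZE = 4096  # bytes per page/frame (an assumed, common value)

# Hypothetical page table: page number -> frame number in main memory.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]   # page-fault handling is omitted in this sketch
    return frame * PAGE_SIZE + offset

print(translate(5000))  # page 1, offset 904 -> frame 2 -> 2*4096 + 904 = 9096
```

Because pages and frames are the same size, only the page number changes during translation; the offset within the page is carried over unchanged.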
• Segmentation - Segmentation is another memory management technique used by operating systems. The process is divided into segments of different sizes and then put in main memory. The program/process is divided into modules, unlike paging, in which the process is divided into fixed-size pages or frames. The corresponding segments are loaded into main memory when the process is executed. Segments contain the program's utility functions, main function, and so on.
END OF UNIT 2